WO2020021601A1 - Image correction apparatus and image correction method - Google Patents

Image correction apparatus and image correction method

Info

Publication number
WO2020021601A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
correction
imaging target
imaging
time
Prior art date
Application number
PCT/JP2018/027529
Other languages
French (fr)
Japanese (ja)
Inventor
孝一 折戸
Original Assignee
三菱電機株式会社
Priority date
Filing date
Publication date
Application filed by 三菱電機株式会社
Priority to JP2019504145A (patent JP6502003B1)
Priority to PCT/JP2018/027529
Priority to CN201880095738.7A (patent CN112425147B)
Publication of WO2020021601A1

Classifications

    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B15/00 Special procedures for taking photographs; Apparatus therefor
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B19/00 Cameras
    • G03B19/02 Still-picture cameras
    • G03B19/04 Roll-film cameras
    • G03B19/07 Roll-film cameras having more than one objective
    • G PHYSICS
    • G03 PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03B APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B5/00 Adjustment of optical system relative to image or object surface other than for focusing
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules

Definitions

  • The present invention relates to an image correction device and an image correction method for correcting an image obtained by an imaging device that images a moving imaging target.
  • Patent Literature 1 discloses a technique for correcting blur that occurs in an image of a ladle in a steelworks captured by an infrared camera.
  • The present invention has been made in view of the above, and an object of the present invention is to provide an image correction apparatus that appropriately corrects blur that occurs in a captured image of a moving imaging target and is caused by the integration of luminance values.
  • To that end, the present invention provides a dividing unit that divides an image, obtained during the time in which one frame is formed by a first imaging device that images a moving imaging target, into a final time image, which is an image formed at the last time of that period, and a non-final time image, which is an image formed before the last time, and that further divides each of the final time image and the non-final time image, based on the position of the imaging target, into a plurality of regions according to the exposure time required when each portion was formed.
  • The present invention also provides a correction unit that corrects the luminance value of each of the plurality of regions included in the final time image and the non-final time image obtained by the dividing unit. For the regions included in the final time image, the correction unit corrects the luminance value of a correction target region using a first correction value corresponding to the exposure time required when that region was formed; for the regions included in the non-final time image, it corrects the luminance value using the first correction value and a second correction value corresponding to the non-final time image.
  • The image correction device according to the present invention has the effect of appropriately correcting blur that occurs in a captured image of a moving imaging target and is caused by the integration of luminance values.
  • FIG. 1 is a diagram showing the configuration of an imaging system according to a first embodiment.
  • FIG. 2 is a diagram showing an example of images obtained by a first imaging device and a second imaging device of the imaging system according to the first embodiment each imaging a moving imaging target.
  • FIG. 3 is a diagram showing the configuration of an image correction device according to the first embodiment.
  • FIG. 4 is a diagram for explaining the function of a dividing unit included in the image correction apparatus according to the first embodiment.
  • FIG. 5 is a diagram for explaining a first correction value stored in a correction value storage unit included in the image correction apparatus according to the first embodiment.
  • FIG. 6 is a flowchart illustrating the operation procedure of the image correction apparatus according to the first embodiment.
  • FIG. 7 is a diagram schematically illustrating the distribution of luminance values obtained by correction performed by a correction unit included in the image correction apparatus according to the first embodiment.
  • FIG. 8 is a diagram for explaining the correction performed by the image correction apparatus according to the first embodiment.
  • FIG. 9 is a diagram for explaining the function of the dividing unit included in the image correction apparatus according to a second embodiment.
  • FIG. 10 is a diagram for explaining a second correction value stored in the correction value storage unit included in the image correction device according to the second embodiment.
  • FIG. 11 is a diagram illustrating the configuration of an image correction device according to a third embodiment.
  • FIG. 12 is a diagram for explaining the function of an interpolation image generation unit included in the image correction device according to the third embodiment.
  • FIG. 13 is a diagram showing a situation in which the image correction devices according to the first to third embodiments are connected to an encoder and a control device.
  • FIG. 1 is a diagram illustrating a configuration of an imaging system 50 according to the first embodiment.
  • The imaging system 50 includes the image correction device 1, which corrects an image. Details of the image correction device 1 will be described later.
  • The imaging system 50 also includes a first imaging device 21 that images the moving imaging target 23 at a first frame rate, and a second imaging device 22 that images the imaging target 23 at a second frame rate higher than the first frame rate at which the first imaging device 21 images the imaging target 23.
  • The first imaging device 21 and the second imaging device 22 are arranged side by side.
  • FIG. 1 also shows the imaging target 23.
  • The frame rate means the number of frames obtained by imaging per unit time.
  • An example of the first imaging device 21 is an infrared camera.
  • An example of the second imaging device 22 is a visible light camera.
  • An example of the first frame rate is 60 fps, and an example of the second frame rate is 1200 fps.
  • In the first embodiment, it is assumed that the first frame rate is 60 fps and the second frame rate is 180 fps.
  • The angle of view of the first imaging device 21 and the angle of view of the second imaging device 22 are the same.
  • FIG. 1 also shows a belt conveyor 24.
  • The imaging target 23 moves in the direction of the arrow 25.
  • An example of the imaging target 23 is corrugated cardboard produced by a hot-melt process. In the first embodiment, it is assumed that the imaging target 23 moves linearly at a constant speed.
  • FIG. 2 is a diagram illustrating an example of images obtained by the first imaging device 21 and the second imaging device 22 of the imaging system 50 according to the first embodiment each imaging the moving imaging target 23. Specifically, FIG. 2(A) shows an example of an image obtained by the first imaging device 21 imaging the moving imaging target 23, and FIG. 2(B) shows an example of an image obtained by the second imaging device 22 imaging the moving imaging target 23. In the figures from FIG. 2 onward, except FIG. 13, the imaging target 23 is drawn as a triangle for convenience of explanation. In the following, it is assumed that the imaging target 23 is a triangle.
  • As shown in FIG. 2(B), the second imaging device 22 obtains an image of the imaging target 23 at each of the times 1/180 second, 2/180 second, and 3/180 second after the initial time of imaging.
  • The first imaging device 21 obtains an image of the imaging target 23 at 1/60 second, that is, 3/180 second, after the initial time of imaging.
  • The period from the initial time of imaging to 1/60 second later is the time in which one frame is formed by the first imaging device 21. That is, the time 3/180 second after the initial time of imaging in FIG. 2 is the last time of the period in which one frame is formed by the first imaging device 21, and the image at 3/180 second after the initial time of imaging in FIG. 2(A) is the image obtained during the period in which one frame is formed by the first imaging device 21. In FIGS. 2(A) and 2(B), the image of the imaging target 23 shown at the initial time of imaging is the image obtained by the second imaging device 22 at the initial time of imaging.
  • As described above, the first frame rate is 60 fps and the second frame rate is 180 fps. That is, the time in which the first imaging device 21 obtains one image is three times the time in which the second imaging device 22 obtains one image.
  • Meanwhile, the imaging target 23 is moving. Therefore, as shown in the image at 1/60 second after the initial time of imaging in FIG. 2(A), the first imaging device 21 obtains, at 1/60 second after the initial time of imaging, a blurred image 23a showing the imaging target 23 at each position it occupied from the initial time of imaging until 1/60 second later.
  • In other words, when the first imaging device 21 obtains the image of the imaging target 23 at 1/60 second after the initial time of imaging, it also obtains the blurred image 23a generated by the integration of the luminance values of parts of the moving imaging target 23.
  • The blurred image 23a is an image that is not obtained by the second imaging device 22 at 1/60 second after the initial time of imaging. That is, the image of the imaging target 23 obtained by the first imaging device 21 at 1/60 second after the initial time of imaging includes portions in which the luminance values of parts of the moving imaging target 23 are integrated.
  • The image correction device 1 aims to obtain, every 1/60 second, an image similar to the image obtained by the second imaging device 22.
  • To this end, the image correction device 1 corrects an image that includes both the blurred image 23a and the image of the imaging target 23, such as the image at 1/60 second after the initial time of imaging in FIG. 2(A), to obtain the image of the imaging target 23. That is, the image correction device 1 corrects the image obtained by the first imaging device 21 that images the moving imaging target 23.
  • FIG. 3 is a diagram illustrating a configuration of the image correction device 1 according to the first embodiment.
  • The image correction device 1 includes a trigger generation unit 2 that generates an imaging trigger for causing each of the first imaging device 21 and the second imaging device 22 to start imaging.
  • FIG. 3 also shows the first imaging device 21 and the second imaging device 22.
  • The image correction device 1 also includes a communication unit 3 that simultaneously transmits the imaging trigger generated by the trigger generation unit 2 to each of the first imaging device 21 and the second imaging device 22, as sketched below.
  • Having received the imaging trigger, the first imaging device 21 and the second imaging device 22 start imaging the imaging target 23.
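As a rough illustration of this step (not part of the patent; the camera objects and their start_imaging() method are hypothetical), the trigger can be fanned out with one thread per device so that the two transmissions start as close to simultaneously as possible:

```python
import threading

def broadcast_trigger(cameras) -> None:
    # Send the imaging trigger to every camera; each camera object is assumed
    # to expose a blocking start_imaging() method (hypothetical interface).
    threads = [threading.Thread(target=cam.start_imaging) for cam in cameras]
    for t in threads:
        t.start()
    for t in threads:
        t.join()  # wait until both devices have accepted the trigger
```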
  • The communication unit 3 has a function of receiving, from the first imaging device 21, the image data obtained when the first imaging device 21 images the imaging target 23, and a function of receiving, from the second imaging device 22, the image data obtained when the second imaging device 22 images the imaging target 23.
  • The image correction device 1 further includes an image storage unit 4 that stores the image data received by the communication unit 3 from the first imaging device 21 and the image data received from the second imaging device 22.
  • The image correction device 1 further includes a feature extraction unit 5 that extracts feature points of the image of the imaging target 23 based on the image data stored in the image storage unit 4 and the image data received by the communication unit 3 from the second imaging device 22.
  • An example of a feature point is an edge of the imaging target 23. In the first embodiment, the edges are the three vertices of the triangular imaging target 23. The shape and size of the triangular imaging target 23 are specified based on the three vertices.
  • The image correction device 1 further includes a feature storage unit 6 that stores data indicating the feature points extracted by the feature extraction unit 5.
  • The image correction apparatus 1 further includes a movement amount calculation unit 7 that calculates the movement amount per unit time of the feature points indicated by the data stored in the feature storage unit 6. Specifically, the movement amount calculation unit 7 calculates the movement amount of a feature point during the time in which one frame is formed by the second imaging device 22, based on the corresponding feature points in two adjacent frames imaged by the second imaging device 22, as sketched below.
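As a minimal sketch of this calculation (not part of the patent; the function name and array layout are assumptions), the movement amount per frame of the second imaging device 22 can be estimated by averaging the displacement of corresponding feature points between two adjacent frames:

```python
import numpy as np

def movement_per_frame(prev_points: np.ndarray, curr_points: np.ndarray) -> np.ndarray:
    """Estimate the movement of the imaging target during the time in which
    one frame is formed by the second (high frame rate) imaging device.

    prev_points and curr_points are (N, 2) arrays of corresponding feature
    points, e.g. the three vertices of the triangular imaging target, in two
    adjacent frames. Returns the mean displacement vector in pixels per frame.
    """
    return (curr_points - prev_points).mean(axis=0)

# Example: the triangle moved by y0 = 5 pixels along x between two 1/180 s frames.
prev_pts = np.array([[10.0, 20.0], [30.0, 20.0], [20.0, 40.0]])
curr_pts = prev_pts + np.array([5.0, 0.0])
print(movement_per_frame(prev_pts, curr_pts))  # -> [5. 0.]
```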
  • The image correction apparatus 1 further includes a movement amount storage unit 8 that stores data indicating the movement amount per unit time of the feature points calculated by the movement amount calculation unit 7.
  • The image correction device 1 further includes a corrected image specifying unit 9 that specifies a correction target image, which is an image based on the image data stored in the image storage unit 4 and received by the communication unit 3 from the first imaging device 21, by extracting feature points of that image. Specifically, the corrected image specifying unit 9 specifies, as the correction target image, the trapezoidal image including the image of the imaging target 23 and the blurred image 23a at 1/60 second after the initial time of imaging in FIG. 2(A).
  • The feature storage unit 6 also stores data indicating the feature points extracted by the corrected image specifying unit 9. More specifically, the feature storage unit 6 also stores data indicating the correction target image.
  • The image correction device 1 further includes a dividing unit 10 that divides the correction target image, which is the image obtained during the time in which one frame is formed by the first imaging device 21 that images the moving imaging target 23, into a final time image, which is an image formed at the last time of that period, and a non-final time image, which is an image formed before the last time.
  • The dividing unit 10 divides each of the final time image and the non-final time image, based on the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed.
  • The position of the imaging target 23 is specified from the image data received by the communication unit 3 from the second imaging device 22. More specifically, the position of the imaging target 23 is specified from the data stored in the movement amount storage unit 8.
  • The dividing unit 10 recognizes the shape and size of the imaging target 23 in the captured image based on the data stored in the feature storage unit 6 and the data stored in the movement amount storage unit 8, and, based on the position of the imaging target 23, divides the correction target image specified by the corrected image specifying unit 9 into the final time image, which is the image of the imaging target 23 at 1/60 second after the initial time of imaging in FIG. 2(A), and the non-final time image, which is the blurred image 23a.
  • FIG. 4 is a diagram for explaining the function of the dividing unit 10 included in the image correction device 1 according to the first embodiment. More specifically, FIG. 4 illustrates the correction target image that includes the image of the imaging target 23 and the blurred image 23a at 1/60 second after the initial time of imaging in FIG. 2(A). As described above, the imaging target 23 moves linearly at a constant speed. Therefore, during the 1/60 second from the initial time of imaging, a correction target image in which a triangular image having the three vertices P0, Q0, and R0 in FIG. 4 moves in the direction of the arrow 25a is obtained by the first imaging device 21.
  • At the initial time of imaging, the first imaging device 21 obtains a triangular image having the three vertices P0, Q0, and R0 in FIG. 4.
  • At 1/180 second after the initial time of imaging, the first imaging device 21 obtains a triangular image having the three vertices P1, Q1, and R1 in FIG. 4.
  • At 2/180 second after the initial time of imaging, the first imaging device 21 obtains a triangular image having the three vertices P2, Q2, and R2 in FIG. 4.
  • At 1/60 second, that is, 3/180 second, after the initial time of imaging, a triangular image having the three vertices P3, Q3, and R3 in FIG. 4 is obtained. The images obtained at each of the above four times are superimposed, so that a trapezoidal correction target image having the four vertices P0, Q0, R3, and P3 is obtained by the first imaging device 21 at 1/60 second after the initial time of imaging.
  • In FIG. 4, "y0" indicates the movement amount of the triangular image in each 1/180 second when the 1/60 second from the initial time of imaging is divided into three equal parts. Data indicating the movement amount is stored in the movement amount storage unit 8. Since the imaging target 23 moves linearly at a constant speed, the movement amount y0 of the triangular image per 1/180 second is constant.
  • In FIG. 4, the code B is assigned to the four regions forming the triangular image having the three vertices P3, Q3, and R3.
  • The image constituted by all the regions assigned the code B is the final time image formed at the last time of the period in which the correction target image is formed.
  • The exposure time required to form each of the four regions assigned the code B differs from the exposure times required to form the other regions.
  • The image constituted by all the regions assigned the code A is the non-final time image formed before the last time of that period.
  • A part of the triangular image at the initial time of imaging overlaps the triangular images at 1/180 second, 2/180 second, and 3/180 second after the initial time of imaging.
  • A part of the triangular image at 1/180 second after the initial time of imaging overlaps the triangular images at 2/180 second and 3/180 second after the initial time of imaging.
  • A part of the triangular image at 2/180 second after the initial time of imaging overlaps the triangular image at 3/180 second after the initial time of imaging.
  • The number appended to the right of the code A or the code B is obtained by adding 1 to the number of times parts of the above triangular images overlap that region in the three equal 1/180-second parts of the 1/60 second from the initial time of imaging.
  • The number of overlaps corresponds to the exposure time. Specifically, the exposure time required to form the region assigned the code A1 is shorter than that required to form the region assigned the code A2, and the exposure time required to form the region assigned the code A2 is shorter than that required to form the region assigned the code A3.
  • Likewise, the exposure time required to form the region assigned the code B1 is shorter than that required to form the region assigned the code B2, the exposure time for the region assigned the code B2 is shorter than that for the region assigned the code B3, and the exposure time for the region assigned the code B3 is shorter than that for the region assigned the code B4.
  • As described above, the dividing unit 10 divides each of the final time image and the non-final time image, based on the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion was formed; the position of the imaging target 23 is specified from the image data received by the communication unit 3 from the second imaging device 22, specifically from the data stored in the movement amount storage unit 8.
  • Specifically, the dividing unit 10 divides the final time image into the region assigned the code B1, the region assigned the code B2, the region assigned the code B3, and the region assigned the code B4, and divides the non-final time image into the region assigned the code A1, the region assigned the code A2, and the region assigned the code A3.
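For constant-speed motion along one image axis, this division can be pictured as counting, for every pixel, how many of the four sub-frame positions of the target cover it. The following Python sketch illustrates the idea (not part of the patent; the function names and the horizontal-shift assumption are ours):

```python
import numpy as np

def overlap_count_map(target_mask: np.ndarray, y0: int, steps: int = 3) -> np.ndarray:
    """Count, per pixel, how many sub-frame positions of the target cover it.

    target_mask is a binary (H, W) mask of the imaging target at the initial
    time; y0 is the movement amount in pixels per 1/180 second.
    """
    h, w = target_mask.shape
    counts = np.zeros((h, w), dtype=int)
    for k in range(steps + 1):            # positions at 0, 1/180, 2/180, 3/180 s
        dx = k * y0
        shifted = np.zeros_like(target_mask)
        shifted[:, dx:] = target_mask[:, : w - dx] if dx else target_mask
        counts += shifted
    return counts

def split_regions(counts: np.ndarray, target_mask: np.ndarray, y0: int, steps: int = 3) -> dict:
    """Split the correction target image into the regions A1..A3 and B1..B4 of
    FIG. 4; the number appended to each code equals the number of sub-frame
    positions that cover the region."""
    w = target_mask.shape[1]
    final_mask = np.zeros_like(target_mask)
    dx = steps * y0
    final_mask[:, dx:] = target_mask[:, : w - dx]  # target position at the last time
    regions = {}
    for n in range(1, steps + 2):
        regions[f"B{n}"] = (counts == n) & (final_mask == 1)
        if n <= steps:
            regions[f"A{n}"] = (counts == n) & (final_mask == 0)
    return regions
```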
  • The image correction device 1 further includes a correction value storage unit 11 that stores a first correction value and a second correction value used when correcting an image.
  • FIG. 5 is a diagram for explaining the first correction value stored in the correction value storage unit 11 included in the image correction device 1 according to the first embodiment.
  • The first correction value corresponds to the exposure time required when the correction target region was formed.
  • FIG. 5 shows an example in which the first correction value decreases in proportion to the exposure time.
  • FIG. 5 shows four first correction values α1, α2, α3, and α4.
  • α1 is the first correction value used when correcting the luminance values of the regions assigned the codes A1 and B1.
  • α2 is the first correction value used when correcting the luminance values of the regions assigned the codes A2 and B2.
  • α3 is the first correction value used when correcting the luminance values of the regions assigned the codes A3 and B3.
  • α4 is the first correction value used when correcting the luminance value of the region assigned the code B4.
  • The image correction device 1 further includes a correction unit 12 that corrects the luminance value of each of the plurality of regions included in the final time image and the non-final time image obtained by the dividing unit 10. For the regions included in the final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when that region was formed. The first correction value is stored in the correction value storage unit 11.
  • For example, the correction unit 12 corrects the luminance value of the region assigned the code B1 using the first correction value α1 corresponding to the exposure time required when that region was formed.
  • Likewise, the correction unit 12 corrects the luminance value of the region assigned the code B2 using the first correction value α2 corresponding to the exposure time required when that region was formed.
  • For the regions included in the non-final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when that region was formed and the second correction value corresponding to the non-final time image.
  • The first correction value and the second correction value are stored in the correction value storage unit 11.
  • The second correction value is a value smaller than one.
  • For example, the correction unit 12 corrects the luminance value of the region assigned the code A1 using the first correction value α1 corresponding to the exposure time required when that region was formed and the second correction value corresponding to the non-final time image.
  • Likewise, the correction unit 12 corrects the luminance value of the region assigned the code A2 using the first correction value α2 corresponding to the exposure time required when that region was formed and the second correction value corresponding to the non-final time image.
  • When the luminance value before correction of the region assigned the code Xn is Xn and the luminance value after correction is Xn′, the correction unit 12 corrects the luminance value before correction according to the following equation (1), where X is A or B, n is an integer from 1 to 4, and β0 is the second correction value. Consistent with the description above, equation (1) amounts to Xn′ = αn × Xn for the regions of the final time image (X = B) and Xn′ = αn × β0 × Xn for the regions of the non-final time image (X = A).
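The following sketch applies this correction region by region (not part of the patent; the numeric values of α1 to α4 and β0 are illustrative, since the patent specifies only their ordering, and `regions` is the code-to-mask dictionary from the earlier sketch):

```python
import numpy as np

# Illustrative correction values: the first correction value decreases with
# exposure time (FIG. 5) and the second correction value is smaller than one.
ALPHA = {1: 4.0, 2: 2.0, 3: 4.0 / 3.0, 4: 1.0}
BETA0 = 0.5

def correct_luminance(image: np.ndarray, regions: dict) -> np.ndarray:
    """Apply equation (1): Bn' = alpha_n * Bn for final time image regions and
    An' = alpha_n * beta0 * An for non-final time image regions."""
    out = image.astype(float)          # work on a floating point copy
    for code, mask in regions.items():
        n = int(code[1:])              # the number appended to the code
        factor = ALPHA[n] if code.startswith("B") else ALPHA[n] * BETA0
        out[mask] *= factor
    return out
```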
  • The image correction device 1 further includes a corrected image generation unit 13 that generates a corrected image based on the luminance values obtained by the correction unit 12.
  • The image correction device 1 further includes a display unit 14 that displays the corrected image generated by the corrected image generation unit 13.
  • An example of the display unit 14 is a liquid crystal display device.
  • An example of each of the image storage unit 4, the feature storage unit 6, the movement amount storage unit 8, and the correction value storage unit 11 included in the image correction device 1 is a semiconductor memory.
  • FIG. 6 is a flowchart of the operation procedure of the image correction apparatus 1 according to the first embodiment.
  • First, the trigger generation unit 2 generates an imaging trigger for causing each of the first imaging device 21 and the second imaging device 22 to start imaging.
  • The communication unit 3 simultaneously transmits the imaging trigger generated by the trigger generation unit 2 to each of the first imaging device 21 and the second imaging device 22 (S1).
  • The communication unit 3 receives, from the first imaging device 21, the image data obtained when the first imaging device 21 images the imaging target 23, and receives, from the second imaging device 22, the image data obtained when the second imaging device 22 images the imaging target 23 (S2).
  • The feature extraction unit 5 extracts feature points of the image of the imaging target 23, which is an image based on the image data received by the communication unit 3 from the second imaging device 22 (S3).
  • The movement amount calculation unit 7 calculates the movement amount per unit time of the feature points extracted by the feature extraction unit 5 (S4).
  • The corrected image specifying unit 9 specifies the correction target image by extracting feature points of the image based on the image data received by the communication unit 3 from the first imaging device 21 (S5).
  • The dividing unit 10 divides the correction target image, which is the image obtained by the first imaging device 21 that images the moving imaging target 23, into the final time image formed at the last time of the period in which the correction target image was formed and the non-final time image formed before that last time.
  • The dividing unit 10 further divides each of the final time image and the non-final time image, based on the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed (S6).
  • The correction unit 12 corrects the luminance value of each of the plurality of regions included in the final time image and the non-final time image obtained by the dividing unit 10. That is, the correction unit 12 corrects the correction target image (S7).
  • The corrected image generation unit 13 generates the corrected image based on the luminance values obtained by the correction unit 12 (S8).
  • The display unit 14 displays the corrected image generated by the corrected image generation unit 13 (S9).
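Putting steps S1 to S9 together, the overall flow can be summarized by a short driver (a sketch only; every unit object and method name below is a hypothetical stand-in for the corresponding unit described above):

```python
def run_image_correction(trigger_gen, comm, feature_ex, move_calc,
                         specifier, divider, corrector, generator, display):
    """Hypothetical driver following the flowchart of FIG. 6 (S1 to S9)."""
    trigger = trigger_gen.generate()
    comm.send_trigger(trigger)                      # S1: trigger both cameras
    image1, images2 = comm.receive_images()         # S2: 60 fps and 180 fps image data
    feats = feature_ex.extract(images2)             # S3: feature points of the target
    movement = move_calc.per_unit_time(feats)       # S4: movement amount per unit time
    target = specifier.specify(image1)              # S5: correction target image
    regions = divider.divide(target, movement)      # S6: final/non-final time regions
    luminance = corrector.correct(target, regions)  # S7: corrected luminance values
    corrected = generator.generate(luminance)       # S8: corrected image
    display.show(corrected)                         # S9: display the corrected image
```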
  • As described above, the image correction device 1 divides the correction target image, which is the image obtained during the time in which one frame is formed by the first imaging device 21 that images the moving imaging target 23, into the final time image, which is an image formed at the last time of that period, and the non-final time image, which is an image formed before the last time.
  • The image correction device 1 divides each of the final time image and the non-final time image, based on the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed.
  • For the regions included in the final time image, the image correction apparatus 1 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when that region was formed.
  • For the regions included in the non-final time image, the image correction device 1 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when that region was formed and the second correction value corresponding to the non-final time image.
  • FIG. 7 is a diagram schematically illustrating the distribution of luminance values obtained by the correction performed by the correction unit 12 included in the image correction apparatus 1 according to the first embodiment.
  • FIG. 7 shows that the luminance values of the regions assigned the codes B1, B2, B3, and B4 are the same after correction.
  • FIG. 7 also shows that the luminance values of the regions assigned the codes A1, A2, and A3 are the same as one another and smaller than the luminance values of the regions assigned the codes B1, B2, B3, and B4.
  • FIG. 8 is a diagram for explaining the correction performed by the image correction apparatus 1 according to the first embodiment.
  • FIG. 8(A) is the same as FIG. 2(A) and shows the correction target image at the final time of imaging.
  • FIG. 8(B) shows the image obtained by the image correction device 1 correcting the correction target image at the final time of imaging.
  • The image shown at the final time of imaging in FIG. 8(B) is an image obtained based on the distribution of luminance values in FIG. 7.
  • FIG. 8(C) is the same as FIG. 2(B) and shows the images obtained by the second imaging device 22 at the initial time of imaging and at 1/180 second, 2/180 second, and 3/180 second after the start of imaging.
  • As shown in FIG. 8(B), the image correction apparatus 1 can obtain an image of the imaging target 23 that does not include the blurred image 23a at the time 3/180 second after the start of imaging.
  • As described above, the image correction apparatus 1 can appropriately correct blur that occurs in a captured image of a moving imaging target and is caused by the integration of luminance values. More specifically, even in an environment where the use of the second imaging device 22 is prohibited and only the first imaging device 21 is used, an image similar to the image obtained by the second imaging device 22 can be obtained every 1/60 second.
  • In addition, since the image correction device 1 corrects the luminance values in consideration of the exposure time, it can obtain an image of the imaging target 23 after the first imaging device 21 has formed one frame. That is, when the first imaging device 21 is an infrared camera used to determine whether corrugated cardboard produced by a hot-melt process has been produced properly, the image correction device 1 can generate an image for determining whether the cardboard has been boxed properly. Furthermore, since the image correction device 1 performs correction not for each pixel but for each region in which a plurality of pixels are grouped, the correction can be performed relatively quickly.
  • Note that the data indicating the shape and size of the imaging target 23 in the captured image may be stored in the feature storage unit 6 in advance. More specifically, data indicating the shape and size of the imaging target 23 in the image captured by the second imaging device 22 may be stored in the feature storage unit 6 in advance.
  • Embodiment 2. Next, an image correction device 1 according to the second embodiment will be described.
  • In the first embodiment, the imaging target 23 moves linearly at a constant speed.
  • In the second embodiment, the imaging target 23 undergoes accelerated motion.
  • The configuration of the image correction device 1 according to the second embodiment is the same as the configuration of the image correction device 1 according to the first embodiment.
  • In the following, the portions different from the first embodiment are described.
  • In the second embodiment, the dividing unit 10 divides the non-final time image into a plurality of regions according to the exposure time required when each portion of the non-final time image was formed and the acceleration of the imaging target 23.
  • FIG. 9 is a diagram for explaining the function of the dividing unit 10 included in the image correction device 1 according to the second embodiment.
  • At the initial time of imaging, the first imaging device 21 obtains a triangular image having the three vertices P0, Q0, and R0 in FIG. 9.
  • At 1/180 second after the initial time of imaging, the first imaging device 21 obtains a triangular image having the three vertices P1, Q1, and R1 in FIG. 9.
  • At 2/180 second after the initial time of imaging, the first imaging device 21 obtains a triangular image having the three vertices P2, Q2, and R2 in FIG. 9.
  • At 1/60 second, that is, 3/180 second, after the initial time of imaging, a triangular image having the three vertices P3, Q3, and R3 in FIG. 9 is obtained. The images obtained at each of the above four times are superimposed, so that a trapezoidal correction target image having the four vertices P0, Q0, R3, and P3 is obtained by the first imaging device 21 at 1/60 second after the initial time of imaging.
  • In FIG. 9, the movement amount of the triangular image is y1 from the initial time of imaging to 1/180 second later, y2 from 1/180 second to 2/180 second, and y3 from 2/180 second to 3/180 second.
  • In the second embodiment, the imaging target 23 undergoes accelerated motion. Therefore, each of the movement amounts y1, y2, and y3 differs from the other two movement amounts.
  • Each of the movement amounts y1, y2, and y3 corresponds to the acceleration of the imaging target 23.
  • In FIG. 9, as in the first embodiment, the code B is assigned to the four regions constituting the triangular image having the three vertices P3, Q3, and R3.
  • The image constituted by all the regions assigned the code B is the final time image formed at the last time of the period in which the correction target image is formed.
  • The image constituted by all the regions assigned the code A is the non-final time image formed before the last time of the period in which the correction target image is formed.
  • The dividing unit 10 divides the non-final time image into a plurality of regions according to the exposure time required when each portion of the non-final time image was formed and the acceleration of the imaging target 23. That is, since the movement amounts y1, y2, and y3 each correspond to the acceleration of the imaging target 23, the dividing unit 10 divides the non-final time image into six regions, each assigned one of the codes A4 to A9.
  • FIG. 10 is a diagram for explaining the second correction value stored in the correction value storage unit 11 included in the image correction device 1 according to the second embodiment.
  • The second correction value according to the second embodiment corresponds to the non-final time image and differs depending on the acceleration of the imaging target 23.
  • FIG. 10 shows an example in which the second correction value decreases as the movement amount of the triangular image per 1/180 second increases.
  • The second correction value is β1 when the movement amount of the triangular image per 1/180 second is y1.
  • The second correction value is β2 when the movement amount of the triangular image per 1/180 second is y2.
  • The second correction value is β3 when the movement amount of the triangular image per 1/180 second is y3.
  • For the region assigned the code A4, the first correction value is α1 and the second correction value is β1.
  • For the region assigned the code A5, the first correction value is α1 and the second correction value is β2.
  • For the region assigned the code A6, the first correction value is α1 and the second correction value is β3.
  • For the region assigned the code A7, the first correction value is α2 and the second correction value is β2.
  • For the region assigned the code A8, the first correction value is α2 and the second correction value is β3.
  • For the region assigned the code A9, the first correction value is α3 and the second correction value is β3.
  • For the regions included in the non-final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when that region was formed and the second correction value corresponding to the acceleration of the imaging target 23 when that region was formed. For the regions included in the final time image, the correction unit 12 corrects the luminance value of a correction target region in the same manner as described in the first embodiment.
  • When the luminance value before correction of the region assigned the code Xn is Xn and the luminance value after correction is Xn′, the correction unit 12 corrects the luminance value before correction according to equation (2), where X is A or B and n is an integer from 1 to 9. Consistent with the description above, equation (2) multiplies the luminance value of each region by the first correction value assigned to it and, for the regions of the non-final time image, also by the second correction value assigned to it.
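A sketch of this acceleration-aware correction follows (again not part of the patent; the numeric α and β values are illustrative, and the (α, β) assignment per region follows the list above):

```python
import numpy as np

# Illustrative correction values; the patent gives only their ordering.
ALPHA = {1: 4.0, 2: 2.0, 3: 4.0 / 3.0, 4: 1.0}  # first correction values
BETA = {1: 0.6, 2: 0.5, 3: 0.4}                 # acceleration-dependent second values

# (first, second) correction value indices per non-final region, per the list above.
PAIRS = {"A4": (1, 1), "A5": (1, 2), "A6": (1, 3),
         "A7": (2, 2), "A8": (2, 3), "A9": (3, 3)}

def correct_luminance_accel(image: np.ndarray, regions: dict) -> np.ndarray:
    """Equation (2) as described: multiply each non-final region A4..A9 by its
    assigned alpha and beta; correct the final regions B1..B4 as in embodiment 1."""
    out = image.astype(float)
    for code, mask in regions.items():
        if code.startswith("B"):
            out[mask] *= ALPHA[int(code[1:])]
        else:
            a, b = PAIRS[code]
            out[mask] *= ALPHA[a] * BETA[b]
    return out
```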
  • As described above, in the second embodiment, the image correction apparatus 1 divides the non-final time image into a plurality of regions according to the exposure time required when each portion of the non-final time image was formed and the acceleration of the imaging target 23.
  • The second correction value according to the second embodiment corresponds to the non-final time image and differs depending on the acceleration of the imaging target 23.
  • For the regions included in the non-final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when that region was formed and the second correction value corresponding to the acceleration of the imaging target 23 when that region was formed.
  • For the regions included in the final time image, the correction unit 12 corrects the luminance value of a correction target region in the same manner as described in the first embodiment.
  • By correcting the correction target image in this way, the image correction apparatus 1 according to the second embodiment can obtain an image of the imaging target 23 that does not include the blurred image 23a at the final time of imaging even when the imaging target 23 undergoes accelerated motion.
  • Moreover, the image correction apparatus 1 according to the second embodiment can obtain an image of the imaging target 23 whose luminance values are corrected in consideration of the exposure time after one frame has been formed by the first imaging device 21.
  • Note that, even when the imaging target 23 undergoes accelerated motion, the image correction apparatus 1 may correct the correction target image by the method described in the first embodiment without considering the acceleration of the imaging target 23. Even in that case, the image correction apparatus 1 can generate a corrected image in which the influence of the blurred image 23a is smaller than before correction and the exposure time is taken into account.
  • Embodiment 3. FIG. 11 is a diagram illustrating the configuration of an image correction device 1A according to the third embodiment.
  • The image correction device 1A has all the components of the image correction device 1 according to the first or second embodiment.
  • The image correction device 1A also has components other than the components of the image correction device 1.
  • In the following, the portions different from the first or second embodiment are described.
  • The image correction device 1A further includes a corrected image storage unit 15 that stores data indicating the corrected image generated by the corrected image generation unit 13.
  • An example of the corrected image storage unit 15 is a semiconductor memory.
  • The image correction device 1A includes an interpolation image generation unit 16 that generates an interpolation image based on a first image, which is based on the data stored in the corrected image storage unit 15, and the position of the imaging target 23.
  • The first image is an image obtained by the correction unit 12 performing the correction.
  • The position of the imaging target 23 is specified based on the data stored in the movement amount storage unit 8. More specifically, the position of the imaging target 23 is specified from the image data received by the communication unit 3 from the second imaging device 22.
  • More precisely, the first image is an image generated by the corrected image generation unit 13 after the correction unit 12 performs the correction.
  • The interpolation image is an image of the imaging target 23 at a time between a first time, at which the uncorrected image of the first image was obtained, and a second time, at which the uncorrected image of a second image, obtained by the correction unit 12 performing the correction after the first image, was obtained.
  • The display unit 14 also displays the interpolation image generated by the interpolation image generation unit 16.
  • FIG. 12 is a diagram for explaining the function of the interpolation image generation unit 16 included in the image correction device 1A according to the third embodiment.
  • FIG. 12(A) illustrates the first image at the first time and the second image at the second time.
  • FIG. 12(B) shows, in addition to the first image and the second image, the interpolation image 23b generated by the interpolation image generation unit 16 for the intermediate time between the first time and the second time.
  • Suppose that the first imaging device 21 captures the imaging target 23 at the intermediate time.
  • The image of the imaging target 23 at the intermediate time is assumed to be the imaging target 23 of the first image moved by the movement amount Z from its position in the first image toward its position in the second image.
  • The movement amount Z is specified based on the position of the imaging target 23. Specifically, the movement amount Z is specified based on the data stored in the movement amount storage unit 8. More specifically, the movement amount Z is specified from the image data received by the communication unit 3 from the second imaging device 22.
  • As shown in FIG. 12(B), the interpolation image generation unit 16 generates the interpolation image 23b, which is the image of the imaging target 23 at the intermediate time, based on the first image and the movement amount Z. Specifically, the interpolation image generation unit 16 generates the interpolation image 23b by moving the imaging target 23 of the first image from its position in the first image by the movement amount Z in the direction of the position of the imaging target 23 of the second image.
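A minimal sketch of this interpolation (not part of the patent; it assumes the motion is along the image x-axis and that Z is a whole number of pixels):

```python
import numpy as np

def interpolate_frame(first_image: np.ndarray, movement_z: int) -> np.ndarray:
    """Generate the interpolation image 23b by shifting the content of the
    first image by the movement amount Z toward the position of the imaging
    target in the second image; vacated pixels are filled with 0."""
    w = first_image.shape[1]
    out = np.zeros_like(first_image)
    out[:, movement_z:] = first_image[:, : w - movement_z]
    return out
```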
  • As described above, the image correction device 1A generates the interpolation image 23b, which is the image of the imaging target 23 at the intermediate time, based on the first image and the position of the imaging target 23, specifically based on the first image and the movement amount Z. That is, the image correction device 1A can present the image of the imaging target 23 at the intermediate time to the user in addition to the first image and the second image. The user can confirm the movement of the imaging target 23 by viewing the interpolation image 23b, which is the image of the imaging target 23 at the intermediate time.
  • The image correction devices 1 and 1A may be connected to an encoder and a control device.
  • FIG. 13 is a diagram illustrating a situation where the image correction devices 1 and 1A according to the first to third embodiments are connected to the encoder 26 and the control device 27.
  • The encoder 26 is a device that detects the position of the imaging target 23.
  • The control device 27 is a device that controls the moving speed of the imaging target 23 by controlling the belt conveyor 24.
  • In this case, the communication unit 3 receives information on the position of the imaging target 23 from one or both of the encoder 26 and the control device 27.
  • The position of the imaging target 23 or the acceleration of the imaging target 23 may be specified from the information received by the communication unit 3.
  • The encoder 26 may be replaced with a sensor that detects the position or the speed of the imaging target 23.
  • In that case, the communication unit 3 receives the information on the position of the imaging target 23 from the sensor.
  • The first imaging device 21 may be a device that detects depth or sound and generates image data based on the detection result.
  • Some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, and the corrected image generation unit 13 included in the image correction device 1 may be implemented by a processor that executes a program stored in a memory.
  • The processor is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
  • When some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, and the corrected image generation unit 13 are realized by a processor, those functions are realized by the processor together with software, firmware, or a combination of software and firmware.
  • The software or firmware is described as a program and stored in the memory. The processor reads out and executes the program stored in the memory, thereby realizing some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, and the corrected image generation unit 13.
  • That is, the image correction device 1 has a memory for storing a program whose execution results in the steps performed by some or all of these units being carried out.
  • The program stored in the memory causes a computer to execute the procedures or methods performed by some or all of these units.
  • The memory is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (registered trademark) (Electrically Erasable Programmable Read-Only Memory), or a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, or a DVD (Digital Versatile Disc).
  • Some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, and the corrected image generation unit 13 may instead be realized by a processing circuit.
  • The processing circuit is dedicated hardware.
  • The processing circuit is, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination thereof.
  • Part of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, and the corrected image generation unit 13 may be realized by dedicated hardware separate from the rest.
  • For the plurality of functions of these units, some may be realized by software or firmware and the rest by dedicated hardware. In this way, the plurality of functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, and the corrected image generation unit 13 can be realized by hardware, software, firmware, or a combination thereof.
  • Similarly, some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, the corrected image generation unit 13, and the interpolation image generation unit 16 included in the image correction device 1A according to the third embodiment may be realized by a processor.
  • In that case, the image correction apparatus 1A has a memory for storing a program whose execution results in the steps performed by some or all of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image specifying unit 9, the dividing unit 10, the correction unit 12, the corrected image generation unit 13, and the interpolation image generation unit 16 being carried out.
  • Part or all of the functions of these units of the image correction device 1A may also be realized by a processing circuit.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Details Of Cameras Including Film Mechanisms (AREA)
  • Adjustment Of Camera Lenses (AREA)

Abstract

An image correction apparatus (1) comprises: a dividing unit (10) for dividing an image, obtained by a first imaging device (21) that images a moving imaging target, into a final time point image that is formed at the final time point of a time in which the image is formed and into a non-final time point image that is formed before the final time point and for dividing each of the final time point image and the non-final time point image on the basis of the position of the imaging target into a plurality of areas for each of a plurality of exposure times required when portions of each of the final time point image and the non-final time point image are formed; and a correction unit (12) for correcting the brightness value of each area. For the plurality of areas included in the final time point image, the correction unit (12) corrects the brightness value of a correction target area by use of a first correction value corresponding to an exposure time required when the correction target area is formed; and for the plurality of areas included in the non-final time point image, the correction unit (12) corrects the brightness value of a correction target area by use of the first correction value and a second correction value corresponding to the non-final time point image.

Description

画像補正装置及び画像補正方法Image correction apparatus and image correction method
 本発明は、移動する撮像対象を撮像する撮像装置によって得られた画像を補正する画像補正装置及び画像補正方法に関する。 The present invention relates to an image correction device and an image correction method for correcting an image obtained by an imaging device that images a moving imaging target.
 従来、移動する撮像対象がカメラで撮像された画像に生じるぶれを補正する技術が知られている。撮像対象が移動すると撮像対象の少なくとも一部の部位に対応する部分画像がひとつのフレームにおける複数の箇所の各々に含まれることになるので、ぶれは撮像対象の一部の部位の輝度値が積算されることによって生じる。例えば、特許文献1は、製鉄所における取鍋が赤外線カメラで撮像された画像に生じるぶれを補正する技術を開示している。 Conventionally, there is known a technique of correcting a blurring of an image of a moving imaging target that is captured by a camera. When the imaging target moves, a partial image corresponding to at least a part of the imaging target is included in each of a plurality of locations in one frame. It is caused by being done. For example, Patent Literature 1 discloses a technique for correcting a shake occurring in an image captured by an infrared camera of a ladle in a steelworks.
特開2017-187434号公報JP-A-2017-187434
 物流業界における撮像対象は、製鉄所における撮像対象である取鍋より速く移動し、物流業界における撮像対象の大きさは、取鍋に比べて著しく小さい。そのため、特許文献1が開示している技術を用いても、物流業界での移動する撮像対象が赤外線カメラで撮影された画像に生じるぶれを適切に補正することはできない。移動する撮像対象が赤外線カメラで撮影された画像に生じるぶれを適切に補正する技術が提供されることが要求されている。 撮 像 The imaging target in the logistics industry moves faster than the ladle that is the imaging target in the steelworks, and the size of the imaging target in the distribution industry is significantly smaller than the ladle. For this reason, even if the technique disclosed in Patent Document 1 is used, it is not possible to appropriately correct the blurring that occurs in an image of a moving imaging target in the logistics industry captured by an infrared camera. There is a need to provide a technique for appropriately correcting blurring of an image of a moving imaging target that is captured by an infrared camera.
The present invention has been made in view of the above, and an object of the present invention is to provide an image correction apparatus that appropriately corrects the blur that occurs in a captured image of a moving imaging target and that is caused by the integration of luminance values.
In order to solve the problem described above and achieve the object, the present invention includes a dividing unit that divides an image, obtained during the time in which one frame is formed by a first imaging device that images a moving imaging target, into a final time image, which is an image formed at the final time of that time, and a non-final time image, which is an image formed before the final time, and that divides each of the final time image and the non-final time image, on the basis of the position of the imaging target, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed; and a correction unit that corrects the luminance value of each of the plurality of regions included in the final time image and the non-final time image obtained by the dividing unit. For the plurality of regions included in the final time image, the correction unit corrects the luminance value of a correction target region using a first correction value corresponding to the exposure time required when the correction target region was formed; for the plurality of regions included in the non-final time image, the correction unit corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the correction target region was formed and a second correction value corresponding to the non-final time image.
The image correction apparatus according to the present invention has the effect of appropriately correcting the blur that occurs in a captured image of a moving imaging target and that is caused by the integration of luminance values.
FIG. 1 is a diagram showing the configuration of an imaging system according to a first embodiment.
FIG. 2 is a diagram showing examples of images obtained when each of a first imaging device and a second imaging device of the imaging system according to the first embodiment images a moving imaging target.
FIG. 3 is a diagram showing the configuration of an image correction apparatus according to the first embodiment.
FIG. 4 is a diagram for explaining the function of a dividing unit of the image correction apparatus according to the first embodiment.
FIG. 5 is a diagram for explaining first correction values stored in a correction value storage unit of the image correction apparatus according to the first embodiment.
FIG. 6 is a flowchart showing the procedure of the operation of the image correction apparatus according to the first embodiment.
FIG. 7 is a diagram schematically showing a distribution of luminance values obtained by the correction performed by a correction unit of the image correction apparatus according to the first embodiment.
FIG. 8 is a diagram for explaining the correction performed by the image correction apparatus according to the first embodiment.
FIG. 9 is a diagram for explaining the function of the dividing unit of the image correction apparatus according to a second embodiment.
FIG. 10 is a diagram for explaining second correction values stored in the correction value storage unit of the image correction apparatus according to the second embodiment.
FIG. 11 is a diagram showing the configuration of an image correction apparatus according to a third embodiment.
FIG. 12 is a diagram for explaining the function of an interpolation image generation unit of the image correction apparatus according to the third embodiment.
FIG. 13 is a diagram showing a situation in which the image correction apparatuses according to the first to third embodiments are connected to an encoder and a control device.
Hereinafter, an image correction apparatus and an image correction method according to embodiments of the present invention will be described in detail with reference to the drawings. The present invention is not limited by these embodiments.
Embodiment 1.

FIG. 1 is a diagram showing the configuration of an imaging system 50 according to the first embodiment. The imaging system 50 includes an image correction apparatus 1 that corrects images; details of the image correction apparatus 1 will be described later. The imaging system 50 further includes a first imaging device 21 that images a moving imaging target 23 at a first frame rate, and a second imaging device 22 that images the imaging target 23 at a second frame rate higher than the first frame rate at which the first imaging device 21 images the imaging target 23. The first imaging device 21 and the second imaging device 22 are arranged side by side. FIG. 1 also shows the imaging target 23. The frame rate means the number of frames obtained by imaging per unit time.
An example of the first imaging device 21 is an infrared camera. An example of the second imaging device 22 is a visible light camera. An example of the first frame rate is 60 fps, and an example of the second frame rate is 1200 fps. For convenience of explanation, the first embodiment assumes that the first frame rate is 60 fps and the second frame rate is 180 fps. In the first embodiment, the angle of view of the first imaging device 21 and the angle of view of the second imaging device 22 are the same.
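For concreteness, the assumed relationship between the two frame rates can be checked with a small calculation. The snippet below is illustrative only and not part of the original disclosure; it uses exact fractions to avoid floating-point rounding.

from fractions import Fraction

f1, f2 = 60, 180       # assumed frame rates (fps) of the first and second imaging devices
sub_frames = f2 // f1  # fast-camera frames per slow-camera frame -> 3
assert sub_frames * Fraction(1, f2) == Fraction(1, f1)  # three 1/180 s sub-frames span 1/60 s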
The imaging target 23 is placed on a belt conveyor 24, which is also shown in FIG. 1. When the belt of the belt conveyor 24 operates, the imaging target 23 moves in the direction of the arrow 25. An example of the imaging target 23 is a cardboard box manufactured by a hot-melt process. In the first embodiment, it is assumed that the imaging target 23 moves linearly at a constant speed.
FIG. 2 is a diagram showing examples of images obtained when each of the first imaging device 21 and the second imaging device 22 of the imaging system 50 according to the first embodiment images the moving imaging target 23. Specifically, FIG. 2(A) shows an example of an image obtained when the first imaging device 21 images the moving imaging target 23, and FIG. 2(B) shows an example of an image obtained when the second imaging device 22 images the moving imaging target 23. In each figure from FIG. 2 onward, except FIG. 13, the imaging target 23 is drawn as a triangle for convenience of explanation. In the following, it is assumed that the imaging target 23 is a triangle.
As shown in FIG. 2(B), the second imaging device 22 obtains an image of the imaging target 23 at each of the times 1/180 second, 2/180 second, and 3/180 second after the initial time of imaging. On the other hand, as shown in FIG. 2(A), the first imaging device 21 obtains an image of the imaging target 23 at 1/60 second, that is, 3/180 second, after the initial time of imaging.
The time from the initial time of imaging to 1/60 second later, that is, to 3/180 second later, is the time in which one frame is formed by the first imaging device 21. In other words, the time 3/180 second after the initial time of imaging in FIG. 2 is the final time of the time in which one frame is formed by the first imaging device 21, and the image at 3/180 second after the initial time of imaging in FIG. 2(A) is the image obtained during the time in which one frame is formed by the first imaging device 21. The images of the imaging target 23 shown at the initial time of imaging in FIGS. 2(A) and 2(B) are images obtained by the second imaging device 22 imaging the imaging target 23 at the initial time of imaging.
As described above, in the first embodiment the first frame rate is 60 fps and the second frame rate is 180 fps. That is, the time the first imaging device 21 takes to obtain one image is three times the time the second imaging device 22 takes to obtain one image. As explained with reference to FIG. 1, the imaging target 23 is moving. Therefore, as shown in the image at 1/60 second after the initial time of imaging in FIG. 2(A), at 1/60 second after the initial time of imaging the first imaging device 21 obtains not only an image of the imaging target 23 at its position 1/60 second after the initial time, but also a blurred image 23a showing the imaging target 23 at each position it occupied from the initial time of imaging until 1/60 second later. In other words, at 1/60 second after the initial time of imaging, the first imaging device 21 obtains an image of the imaging target 23 together with the blurred image 23a, which arises because the luminance values of parts of the imaging target 23 moving during that 1/60 second are integrated.
There are cases in which the use of the second imaging device 22 is prohibited. As described above, at 1/60 second after the initial time of imaging, the first imaging device 21 obtains the image of the imaging target 23 together with the blurred image 23a. The blurred image 23a is an image that the second imaging device 22 would not produce at 1/60 second after the initial time of imaging. The image of the imaging target 23 that the first imaging device 21 obtains at 1/60 second after the initial time of imaging includes portions in which the luminance values of parts of the moving imaging target 23 are integrated. The purpose of the image correction apparatus 1 is, when the first imaging device 21 is used without the second imaging device 22, to obtain every 1/60 second an image similar to the image the second imaging device 22 would obtain.
The image correction apparatus 1 performs correction for obtaining an image of the imaging target 23 from an image that includes both the image of the imaging target 23 and the blurred image 23a, such as the image at 1/60 second after the initial time of imaging in FIG. 2(A). That is, the image correction apparatus 1 corrects the image obtained by the first imaging device 21 that images the moving imaging target 23.
FIG. 3 is a diagram showing the configuration of the image correction apparatus 1 according to the first embodiment. The image correction apparatus 1 includes a trigger generation unit 2 that generates an imaging trigger for causing each of the first imaging device 21 and the second imaging device 22 to start imaging. FIG. 3 also shows the first imaging device 21 and the second imaging device 22. The image correction apparatus 1 includes a communication unit 3 that simultaneously transmits the imaging trigger generated by the trigger generation unit 2 to each of the first imaging device 21 and the second imaging device 22.
The first imaging device 21 and the second imaging device 22 receive the imaging trigger and start imaging the imaging target 23. The communication unit 3 has a function of receiving from the first imaging device 21 the image data obtained when the first imaging device 21 images the imaging target 23, and a function of receiving from the second imaging device 22 the image data obtained when the second imaging device 22 images the imaging target 23.
The image correction apparatus 1 further includes an image storage unit 4 that stores the image data the communication unit 3 received from the first imaging device 21 and the image data it received from the second imaging device 22. The image correction apparatus 1 further includes a feature extraction unit 5 that extracts feature points of the image of the imaging target 23 on the basis of the image data that is stored in the image storage unit 4 and that the communication unit 3 received from the second imaging device 22. An example of a feature point is an edge of the imaging target 23. In the first embodiment, the edges are the three vertices of the triangular imaging target 23; the shape and size of the triangular imaging target 23 are specified on the basis of these three vertices. The image correction apparatus 1 further includes a feature storage unit 6 that stores data indicating the feature points extracted by the feature extraction unit 5.
The image correction apparatus 1 further includes a movement amount calculation unit 7 that calculates the movement amount per unit time of the feature points indicated by the data stored in the feature storage unit 6. Specifically, on the basis of the corresponding feature points in two adjacent frames captured by the second imaging device 22, the movement amount calculation unit 7 calculates the movement amount of the feature points during the time in which one frame is formed by the second imaging device 22. The image correction apparatus 1 further includes a movement amount storage unit 8 that stores data indicating the movement amount per unit time of the feature points obtained by the calculation of the movement amount calculation unit 7.
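To make the movement amount calculation concrete, the following sketch (illustrative only, not part of the original disclosure; the function and variable names are assumptions introduced here) computes the per-frame displacement of matched feature points between two adjacent frames of the second imaging device.

import numpy as np

def movement_per_frame(prev_pts, curr_pts):
    """Average displacement (pixels per frame) of matched feature points
    between two adjacent frames of the high-frame-rate camera."""
    # prev_pts, curr_pts: (N, 2) arrays of corresponding (x, y) positions.
    return (curr_pts - prev_pts).mean(axis=0)

# Example: the three vertices of the triangular target, 1/180 s apart.
prev_pts = np.array([[10.0, 40.0], [30.0, 40.0], [20.0, 20.0]])
curr_pts = prev_pts + np.array([0.0, 5.0])   # target moved 5 px downward
y0 = movement_per_frame(prev_pts, curr_pts)  # -> array([0., 5.]), 5 px per 1/180 s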
The image correction apparatus 1 further includes a corrected image specifying unit 9 that extracts feature points of a correction target image, which is an image based on the image data stored in the image storage unit 4 and received by the communication unit 3 from the first imaging device 21, and thereby specifies the correction target image. Specifically, the corrected image specifying unit 9 specifies as the correction target image the trapezoidal image that includes the image of the imaging target 23 at 1/60 second after the initial time of imaging in FIG. 2(A) and the blurred image 23a. The feature storage unit 6 also stores data indicating the feature points extracted by the corrected image specifying unit 9; in other words, the feature storage unit 6 also stores data indicating the correction target image.
The image correction apparatus 1 further includes a dividing unit 10 that divides the correction target image, which is the image obtained during the time in which one frame is formed by the first imaging device 21 imaging the moving imaging target 23, into a final time image, which is an image formed at the final time of that time, and a non-final time image, which is an image formed before the final time. The dividing unit 10 divides each of the final time image and the non-final time image, on the basis of the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed. The position of the imaging target 23 is specified by the image data the communication unit 3 received from the second imaging device 22; more specifically, it is specified by the data stored in the movement amount storage unit 8.
Specifically, on the basis of the shape and size of the imaging target 23 in the captured image, which are based on the data stored in the feature storage unit 6, and the position of the imaging target 23, which is based on the data stored in the movement amount storage unit 8, the dividing unit 10 divides the correction target image specified by the corrected image specifying unit 9 into the final time image, which is the image of the imaging target 23 at 1/60 second after the initial time of imaging in FIG. 2(A), and the non-final time image, which is the blurred image 23a.
FIG. 4 is a diagram for explaining the function of the dividing unit 10 of the image correction apparatus 1 according to the first embodiment. More specifically, FIG. 4 shows the correction target image, which includes the image of the imaging target 23 at 1/60 second after the initial time of imaging in FIG. 2(A) and the blurred image 23a. As described above, the imaging target 23 moves linearly at a constant speed. Therefore, at 1/60 second after the initial time of imaging, the first imaging device 21 obtains a correction target image that looks as if the triangular image having the points P0, Q0, and R0 in FIG. 4 as its three vertices had moved successively in the direction of the arrow 25a.
At the initial time of imaging, a triangular image having the points P0, Q0, and R0 in FIG. 4 as its three vertices is obtained by the first imaging device 21. At 1/180 second after the initial time of imaging, a triangular image having the points P1, Q1, and R1 as its three vertices is obtained by the first imaging device 21. At 2/180 second after the initial time, a triangular image having the points P2, Q2, and R2 as its three vertices is obtained, and at 1/60 second, which is 3/180 second after the initial time, a triangular image having the points P3, Q3, and R3 as its three vertices is obtained. The images obtained at each of these four times are combined, so that at 1/60 second, which is 3/180 second after the initial time of imaging, a trapezoidal correction target image having the points P0, Q0, R3, and P3 as its four vertices is obtained by the first imaging device 21.
In FIG. 4, y0 denotes the movement amount of the triangular image in each 1/180 second when the 1/60 second from the initial time of imaging is divided into three equal parts. Data indicating this movement amount is stored in the movement amount storage unit 8. Since the imaging target 23 moves linearly at a constant speed, the movement amount y0 of the triangular image per 1/180 second is constant.
As described above, at 1/60 second, which is 3/180 second after the initial time of imaging, the triangular image having the points P3, Q3, and R3 in FIG. 4 as its three vertices is captured by the first imaging device 21. The code B is assigned to the four regions that constitute this triangular image. The image constituted by all the regions to which the code B is assigned is the final time image formed at the final time of the time in which the correction target image is formed. However, the exposure time required to form each of the four regions to which the code B is assigned differs from the exposure times required to form the other regions. In FIG. 4, the image constituted by all the regions to which the code A is assigned is the non-final time image formed before the final time.
A part of the triangular image at the initial time of imaging overlaps with the triangular image at 1/180 second after the initial time, the triangular image at 2/180 second after the initial time, and the triangular image at 3/180 second after the initial time. A part of the triangular image at 1/180 second after the initial time overlaps with the triangular images at 2/180 second and at 3/180 second after the initial time. A part of the triangular image at 2/180 second after the initial time overlaps with the triangular image at 3/180 second after the initial time.
In FIG. 4, the number appended to the right of the code A or B is the number of times parts of the triangular images overlap in each 1/180 second, plus one, when the 1/60 second from the initial time of imaging is divided into three equal parts. The number of overlaps corresponds to the exposure time. Specifically, the exposure time required to form the region to which the code A1 is assigned is shorter than that required to form the region A2, and the exposure time required to form the region A2 is shorter than that required to form the region A3.
Similarly, the exposure time required to form the region to which the code B1 is assigned is shorter than that required to form the region B2; the exposure time for the region B2 is shorter than that for the region B3; and the exposure time for the region B3 is shorter than that for the region B4.
The dividing unit 10 divides each of the final time image and the non-final time image, on the basis of the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed. The position of the imaging target 23 is specified by the image data the communication unit 3 received from the second imaging device 22; specifically, it is specified by the data stored in the movement amount storage unit 8.
That is, as shown in FIG. 4, the dividing unit 10 divides the final time image into the region to which the code B1 is assigned, the region to which the code B2 is assigned, the region to which the code B3 is assigned, and the region to which the code B4 is assigned. In addition, the dividing unit 10 divides the non-final time image into the region to which the code A1 is assigned, the region to which the code A2 is assigned, and the region to which the code A3 is assigned.
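The division into exposure-time regions can be sketched as follows. This is a minimal illustration under the constant-velocity assumption, with all names introduced here; it uses np.roll, which wraps at the image border, a simplification that is acceptable while the target stays inside the frame.

import numpy as np

def divide_into_regions(mask0, y0, n_steps=3):
    """Split one frame of the first imaging device into overlap-count regions.

    mask0:   binary (H, W) mask of the target at the initial sub-frame position.
    y0:      downward shift in pixels per 1/180 s sub-frame (constant velocity).
    n_steps: number of sub-frame shifts within one 1/60 s frame.
    Returns per-pixel overlap counts and the mask of the final time image;
    counts on the final mask correspond to regions B1..B4, counts outside
    it to regions A1..A3.
    """
    counts = np.zeros(mask0.shape, dtype=int)
    for k in range(n_steps + 1):               # target positions at k * y0
        counts += np.roll(mask0, k * y0, axis=0)
    final_mask = np.roll(mask0, n_steps * y0, axis=0).astype(bool)
    return counts, final_mask

# Example: a target three rows tall moving two rows per sub-frame.
mask = np.zeros((12, 6), dtype=int)
mask[0:3, :] = 1
counts, final = divide_into_regions(mask, y0=2)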
Returning to FIG. 3, the image correction apparatus 1 further includes a correction value storage unit 11 that stores the first correction values and the second correction value used when correcting an image. FIG. 5 is a diagram for explaining the first correction values stored in the correction value storage unit 11 of the image correction apparatus 1 according to the first embodiment. A first correction value corresponds to the exposure time required when the region to be corrected was formed. FIG. 5 shows an example in which the first correction value decreases in proportion to the exposure time.
FIG. 5 shows four first correction values α1, α2, α3, and α4. α1 is the first correction value used when correcting the luminance values of the regions to which the codes A1 and B1 are assigned. α2 is the first correction value used for the regions to which the codes A2 and B2 are assigned. α3 is the first correction value used for the regions to which the codes A3 and B3 are assigned. α4 is the first correction value used for the region to which the code B4 is assigned.
Returning to FIG. 3, the image correction apparatus 1 further includes a correction unit 12 that corrects the luminance value of each of the plurality of regions included in the final time image and the non-final time image obtained by the dividing unit 10. For the plurality of regions included in the final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the correction target region was formed. The first correction values are stored in the correction value storage unit 11.
When the correction target region is the region to which the code B1 is assigned, the correction unit 12 corrects the luminance value of that region using the first correction value α1, which corresponds to the exposure time required when the region B1 was formed. When the correction target region is the region to which the code B2 is assigned, the correction unit 12 corrects the luminance value of that region using the first correction value α2, which corresponds to the exposure time required when the region B2 was formed.
For the plurality of regions included in the non-final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the correction target region was formed and the second correction value corresponding to the non-final time image. The first correction values and the second correction value are stored in the correction value storage unit 11. For example, the second correction value is a value smaller than 1.
When the correction target region is the region to which the code A1 is assigned, the correction unit 12 corrects the luminance value of that region using the first correction value α1, which corresponds to the exposure time required when the region A1 was formed, and the second correction value corresponding to the non-final time image. When the correction target region is the region to which the code A2 is assigned, the correction unit 12 corrects the luminance value of that region using the first correction value α2, which corresponds to the exposure time required when the region A2 was formed, and the second correction value corresponding to the non-final time image.
More specifically, when the pre-correction luminance value of the region to which the code Xn is assigned is Xn and the post-correction luminance value of that region is Xn', the correction unit 12 corrects the pre-correction luminance values, for example, according to the following equations (1), where X is A or B, n is an integer from 1 to 4, and β0 is the second correction value.

A1' = α1 × β0 × A1
A2' = α2 × β0 × A2
A3' = α3 × β0 × A3
B1' = α1 × B1
B2' = α2 × B2
B3' = α3 × B3
B4' = α4 × B4     (1)
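As a concrete illustration of equations (1), the sketch below applies the per-region correction. The numeric α and β0 values are placeholders assumed here for illustration, since the disclosure only states that the first correction value decreases with the exposure time and gives a value smaller than 1 as an example of the second correction value.

# Hedged sketch of equations (1): per-region luminance correction.
alpha = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.25}   # first correction values (assumed)
beta0 = 0.5                                   # second correction value (assumed, < 1)

def correct_region(luminance, n, in_final_image):
    """Apply equations (1) to one region whose overlap count is n."""
    corrected = alpha[n] * luminance           # first correction value
    if not in_final_image:                     # regions A1..A3 (the blurred tail)
        corrected *= beta0                     # second correction value
    return corrected

print(correct_region(200.0, n=2, in_final_image=True))   # B2: 150.0
print(correct_region(200.0, n=2, in_final_image=False))  # A2: 75.0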
The image correction apparatus 1 further includes a corrected image generation unit 13 that generates a corrected image on the basis of the luminance values obtained by the correction unit 12, and a display unit 14 that displays the corrected image generated by the corrected image generation unit 13. An example of the display unit 14 is a liquid crystal display device.
Examples of the image storage unit 4, the feature storage unit 6, the movement amount storage unit 8, and the correction value storage unit 11 of the image correction apparatus 1 are semiconductor memories.
FIG. 6 is a flowchart showing the procedure of the operation of the image correction apparatus 1 according to the first embodiment. The trigger generation unit 2 generates the imaging trigger for causing each of the first imaging device 21 and the second imaging device 22 to start imaging, and the communication unit 3 simultaneously transmits the imaging trigger generated by the trigger generation unit 2 to each of the first imaging device 21 and the second imaging device 22 (S1). The communication unit 3 receives from the first imaging device 21 the image data obtained when the first imaging device 21 imaged the imaging target 23, and receives from the second imaging device 22 the image data obtained when the second imaging device 22 imaged the imaging target 23 (S2).
The feature extraction unit 5 extracts feature points of the image of the imaging target 23, which is an image based on the image data the communication unit 3 received from the second imaging device 22 (S3). The movement amount calculation unit 7 calculates the movement amount per unit time of the feature points extracted by the feature extraction unit 5 (S4). The corrected image specifying unit 9 extracts feature points of the image based on the image data the communication unit 3 received from the first imaging device 21 and specifies the correction target image (S5).
The dividing unit 10 divides the correction target image, which is the image obtained by the first imaging device 21 imaging the moving imaging target 23, into the final time image, which is the image formed at the final time of the time in which the correction target image is formed, and the non-final time image, which is the image formed before that final time (S6). The dividing unit 10 further divides each of the final time image and the non-final time image, on the basis of the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed (S6).
The correction unit 12 corrects the luminance value of each of the plurality of regions included in the final time image and the non-final time image obtained by the dividing unit 10; that is, the correction unit 12 corrects the correction target image (S7). The corrected image generation unit 13 generates the corrected image on the basis of the luminance values obtained by the correction unit 12 (S8). The display unit 14 displays the corrected image generated by the corrected image generation unit 13 (S9).
As described above, the image correction apparatus 1 divides the correction target image, which is the image obtained during the time in which one frame is formed by the first imaging device 21 imaging the moving imaging target 23, into the final time image, which is the image formed at the final time of that time, and the non-final time image, which is the image formed before the final time. The image correction apparatus 1 divides each of the final time image and the non-final time image, on the basis of the position of the imaging target 23, into a plurality of regions according to the exposure time required when each portion of the final time image and the non-final time image was formed.
For the plurality of regions included in the final time image, the image correction apparatus 1 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the correction target region was formed. For the plurality of regions included in the non-final time image, the image correction apparatus 1 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the correction target region was formed and the second correction value corresponding to the non-final time image.
FIG. 7 is a diagram schematically showing the distribution of luminance values obtained by the correction performed by the correction unit 12 of the image correction apparatus 1 according to the first embodiment. FIG. 7 shows that the luminance values of the regions to which the codes B1, B2, B3, and B4 are assigned are the same. In addition, FIG. 7 shows that the luminance values of the regions to which the codes A1, A2, and A3 are assigned are smaller than those of the regions B1 to B4 and are equal to one another.
FIG. 8 is a diagram for explaining the correction performed by the image correction apparatus 1 according to the first embodiment. FIG. 8(A), which is the same as FIG. 2(A), shows the correction target image at the final time of imaging. FIG. 8(B) shows the image obtained when the image correction apparatus 1 corrects the correction target image at the final time of imaging; the image shown at the final time of imaging in FIG. 8(B) is obtained on the basis of the luminance value distribution of FIG. 7. FIG. 8(C), which is the same as FIG. 2(B), shows the images obtained by the second imaging device 22 at the initial time of imaging and at 1/180 second, 2/180 second, and 3/180 second after the start of imaging.
As shown in FIG. 8(B), the image correction apparatus 1 according to the first embodiment can obtain, at the time 3/180 second after the start of imaging, an image of the imaging target 23 that does not include the blurred image 23a. That is, the image correction apparatus 1 can appropriately correct the blur that occurs in a captured image of a moving imaging target and that is caused by the integration of luminance values. Furthermore, when the use of the second imaging device 22 is prohibited and the first imaging device 21 is used without the second imaging device 22, the image correction apparatus 1 can obtain every 1/60 second an image similar to the image the second imaging device 22 would obtain.
In addition, as can be understood from FIG. 7, the image correction apparatus 1 can obtain, with the exposure time taken into account, an image of the imaging target 23 whose luminance values have been corrected after one frame has been formed by the first imaging device 21. That is, when the first imaging device 21 is an infrared camera used to determine whether a cardboard box manufactured by a hot-melt process has been manufactured properly, the image correction apparatus 1 can generate an image for making that determination. Since the image correction apparatus 1 performs correction not per pixel but per region in which a plurality of pixels are grouped, the correction can be performed relatively quickly.
Note that data indicating the shape and size of the imaging target 23 in the captured image may be stored in the feature storage unit 6 in advance. More specifically, data indicating the shape and size of the imaging target 23 in the image captured by the second imaging device 22 may be stored in the feature storage unit 6 in advance.
Embodiment 2.

Next, the image correction apparatus 1 according to the second embodiment will be described. In the first embodiment, it was assumed that the imaging target 23 moves linearly at a constant speed; in the second embodiment, it is assumed that the imaging target 23 undergoes accelerated motion. The configuration of the image correction apparatus 1 according to the second embodiment is the same as that of the first embodiment, so the second embodiment describes only the differences from the first embodiment.
In the second embodiment, the dividing unit 10 divides the non-final time image into a plurality of regions corresponding to both the exposure time when each part of the non-final time image was formed and the acceleration of the imaging target 23. FIG. 9 is a diagram for explaining the function of the dividing unit 10 of the image correction apparatus 1 according to the second embodiment. At the initial time of imaging, a triangular image having the points P0, Q0, and R0 in FIG. 9 as its three vertices is obtained by the first imaging device 21. At 1/180 second after the initial time of imaging, a triangular image having the points P1, Q1, and R1 as its three vertices is obtained by the first imaging device 21.
At 2/180 second after the initial time of imaging, a triangular image having the points P2, Q2, and R2 in FIG. 9 as its three vertices is obtained by the first imaging device 21. At 1/60 second, which is 3/180 second after the initial time of imaging, a triangular image having the points P3, Q3, and R3 as its three vertices is obtained. The images obtained at each of these four times are combined, so that at 1/60 second, which is 3/180 second after the initial time of imaging, a trapezoidal correction target image having the points P0, Q0, R3, and P3 as its four vertices is obtained by the first imaging device 21.
In FIG. 9, the movement amount of the triangular image is y1 between the initial time of imaging and 1/180 second later, y2 between 1/180 second and 2/180 second, and y3 between 2/180 second and 3/180 second. In the second embodiment, the imaging target 23 undergoes accelerated motion, so each of the movement amounts y1, y2, and y3 differs from the other two. Each of the movement amounts y1, y2, and y3 corresponds to the acceleration of the imaging target 23.
In FIG. 9, as in the first embodiment, the code B is assigned to the four regions constituting the triangular image having the points P3, Q3, and R3 as its three vertices. The image constituted by all the regions to which the code B is assigned is the final time image formed at the final time of the time in which the correction target image is formed. The image constituted by all the regions to which the code A is assigned is the non-final time image formed before that final time.
As described above, the dividing unit 10 divides the non-final time image into a plurality of regions corresponding to both the exposure time when each part of the non-final time image was formed and the acceleration of the imaging target. That is, since each of the movement amounts y1, y2, and y3 corresponds to the acceleration of the imaging target 23, the dividing unit 10 divides the non-final time image into six regions, to each of which one of the codes A4 to A9 is assigned. FIG. 10 is a diagram for explaining the second correction values stored in the correction value storage unit 11 of the image correction apparatus 1 according to the second embodiment. The second correction values of the second embodiment correspond to the non-final time image and differ depending on the acceleration of the imaging target 23. FIG. 10 shows an example in which the second correction value becomes smaller as the movement amount of the triangular image per 1/180 second becomes larger.
The second correction value when the movement amount of the triangular image per 1/180 second is y1 is β1; when the movement amount is y2, it is β2; and when the movement amount is y3, it is β3.
For the region to which the code A4 is assigned, the first correction value is α1 and the second correction value is β1. For the region A5, the first correction value is α1 and the second correction value is β2. For the region A6, the first correction value is α1 and the second correction value is β3. For the region A7, the first correction value is α2 and the second correction value is β2. For the region A8, the first correction value is α2 and the second correction value is β3. For the region A9, the first correction value is α3 and the second correction value is β3.
For the plurality of regions included in the non-final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the correction target region was formed and the second correction value corresponding to the acceleration of the imaging target 23 when the correction target region was formed. For the plurality of regions included in the final time image, the correction unit 12 corrects the luminance value of a correction target region by the method described in the first embodiment.
When the pre-correction luminance value of the region to which the code Xn is assigned is Xn and the post-correction luminance value of that region is Xn', the correction unit 12 corrects the pre-correction luminance values, for example, according to the following equations (2), where X is A or B and n is an integer from 1 to 9.

A4' = α1 × β1 × A4
A5' = α1 × β2 × A5
A6' = α1 × β3 × A6
A7' = α2 × β2 × A7
A8' = α2 × β3 × A8
A9' = α3 × β3 × A9
B1' = α1 × B1
B2' = α2 × B2
B3' = α3 × B3
B4' = α4 × B4     (2)
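Extending the earlier sketch to equations (2), the second correction value is now looked up from the movement amount of the sub-frame in which the region was formed rather than being a single constant. The numeric values are again placeholders assumed for illustration, not values from the disclosure.

# Hedged sketch of equations (2): acceleration-dependent second correction value.
alpha = {1: 1.0, 2: 0.75, 3: 0.5, 4: 0.25}   # first correction values (assumed)
beta = {"y1": 0.9, "y2": 0.7, "y3": 0.5}     # second correction values (assumed)

def correct_region_accel(luminance, n, in_final_image, movement=None):
    """Apply equations (2) to one region with overlap count n; movement is the
    sub-frame movement key ('y1'..'y3') for non-final regions, None otherwise."""
    corrected = alpha[n] * luminance
    if not in_final_image:
        corrected *= beta[movement]
    return corrected

# Example: region A7 (overlap count 2, formed in the sub-frame that moved y2).
print(correct_region_accel(200.0, n=2, in_final_image=False, movement="y2"))  # 105.0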
As described above, the image correction apparatus 1 according to the second embodiment divides the non-final time image into a plurality of regions corresponding to both the exposure time when each part of the non-final time image was formed and the acceleration of the imaging target 23. The second correction values of the second embodiment correspond to the non-final time image and differ depending on the acceleration of the imaging target 23. For the plurality of regions included in the non-final time image, the correction unit 12 corrects the luminance value of a correction target region using the first correction value corresponding to the exposure time required when the region was formed and the second correction value corresponding to the acceleration of the imaging target 23 when the region was formed; for the plurality of regions included in the final time image, the correction unit 12 corrects the luminance value by the method described in the first embodiment.
That is, even when the imaging target 23 undergoes accelerated motion, the image correction apparatus 1 according to the second embodiment can, as in the first embodiment, obtain an image of the imaging target 23 that does not include the blurred image 23a at the final time of imaging by correcting the correction target image. Furthermore, the image correction apparatus 1 according to the second embodiment can obtain, with the exposure time taken into account, an image of the imaging target 23 whose luminance values have been corrected after one frame has been formed by the first imaging device 21.
Note that the image correction apparatus 1 according to the second embodiment may correct the correction target image by the method described in the first embodiment, without considering the acceleration of the imaging target 23, even when the imaging target 23 undergoes accelerated motion. Even in that case, the image correction apparatus 1 can generate a corrected image in which the influence of the blurred image 23a is smaller than in the conventional art and the exposure time is taken into account.
Embodiment 3.

Next, an image correction apparatus 1A according to a third embodiment will be described. FIG. 11 is a diagram showing the configuration of the image correction apparatus 1A according to the third embodiment. The image correction apparatus 1A has all the components of the image correction apparatus 1 according to the first or second embodiment, together with additional components. The third embodiment describes the differences from the first or second embodiment. The image correction apparatus 1A further includes a corrected image storage unit 15 that stores data indicating the corrected image generated by the corrected image generation unit 13. An example of the corrected image storage unit 15 is a semiconductor memory.
The image correction apparatus 1A further includes an interpolated image generation unit 16 that generates an interpolated image based on a first image, which is based on the data stored in the corrected image storage unit 15, and on the position of the imaging target 23. The first image is an image obtained through the correction performed by the correction unit 12. The position of the imaging target 23 is specified based on the data stored in the movement amount storage unit 8; more specifically, it is specified from the image data that the communication unit 3 receives from the second imaging device 22.
Specifically, the first image is an image generated by the corrected image generation unit 13 after the correction unit 12 performs the correction. The interpolated image is an image of the imaging target 23 at a time between a first time, at which the pre-correction image of the first image was obtained, and a second time, at which the pre-correction image of a second image, obtained next after the first image through the correction by the correction unit 12, is obtained. The display unit 14 also displays the interpolated image generated by the interpolated image generation unit 16.
FIG. 12 is a diagram for explaining the function of the interpolated image generation unit 16 included in the image correction apparatus 1A according to the third embodiment. FIG. 12(A) shows the first image at the first time and the second image at the second time. FIG. 12(B) shows, in addition to the first image and the second image, the interpolated image 23b generated by the interpolated image generation unit 16 for an intermediate time between the first time and the second time.
Assume that an image of the imaging target 23 was captured by the first imaging device 21 at the intermediate time. Assume further that the image of the imaging target 23 at the intermediate time is an image obtained by moving the imaging target 23 of the first image by a movement amount Z from the position of the imaging target 23 in the first image toward the position of the imaging target 23 in the second image.
FIG. 12(A) shows, by a broken line, the image 23b of the imaging target 23 obtained by moving the imaging target 23 of the first image by the movement amount Z at the intermediate time. The movement amount Z is specified based on the position of the imaging target 23; specifically, it is specified based on the data stored in the movement amount storage unit 8, and more specifically from the image data that the communication unit 3 receives from the second imaging device 22.
Based on the first image and the movement amount Z, the interpolated image generation unit 16 generates the interpolated image 23b, which is the image of the imaging target 23 at the intermediate time, as shown in FIG. 12(B). Specifically, the interpolated image generation unit 16 moves the imaging target 23 of the first image by the movement amount Z from the position of the imaging target 23 in the first image toward the position of the imaging target 23 in the second image, thereby generating the interpolated image 23b.
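A minimal sketch of this shift operation is given below, assuming the corrected images are numpy arrays and the target positions are available in pixel coordinates (for example, derived from the data in the movement amount storage unit 8). The function and variable names are hypothetical, and scipy.ndimage.shift is used only as one convenient way to translate an image.

import numpy as np
from scipy.ndimage import shift

def interpolate(first_image, pos_first, pos_second, fraction=0.5):
    """Shift the corrected first image from the target's position in the
    first image toward its position in the second image. The movement
    amount Z is taken here as a fraction of the displacement between the
    two positions; fraction=0.5 corresponds to the intermediate time if
    the speed between the two frames is roughly constant."""
    dx, dy = (np.asarray(pos_second) - np.asarray(pos_first)) * fraction
    # shift() expects offsets in (row, column) order, i.e. (dy, dx).
    return shift(first_image, (dy, dx), order=1)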
As described above, based on the first image and the position of the imaging target 23, specifically based on the first image and the movement amount Z, the image correction apparatus 1A generates the interpolated image 23b, which is the image of the imaging target 23 at the intermediate time, as shown in FIG. 12(B). That is, in addition to the first image and the second image, the image correction apparatus 1A can present the image of the imaging target 23 at the intermediate time to the user. By viewing the interpolated image 23b, the user can confirm the movement of the imaging target 23.
In the first to third embodiments, the image correction apparatuses 1 and 1A may be connected to an encoder and a control device. FIG. 13 is a diagram illustrating a situation in which the image correction apparatuses 1 and 1A according to the first to third embodiments are connected to an encoder 26 and a control device 27. The encoder 26 is a device that detects the position of the imaging target 23. The control device 27 is a device that controls the moving speed of the imaging target 23 by controlling the belt conveyor 24.
In this case, the communication unit 3 receives information related to the position of the imaging target 23 from one or both of the encoder 26 and the control device 27. The position or the acceleration of the imaging target 23 may be specified from the information received by the communication unit 3. The encoder 26 may be replaced with a sensor that detects the position or speed of the imaging target 23; in that case, the communication unit 3 receives the information related to the position of the imaging target 23 from the sensor.
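Where an encoder is used, the movement amount can be derived directly from its count, as in the following hedged sketch; the resolution and feed-per-revolution values are assumptions for illustration and are not parameters given in this disclosure.

COUNTS_PER_REV = 2048       # encoder counts per drive revolution (assumed)
FEED_PER_REV_MM = 150.0     # conveyor travel per revolution in mm (assumed)

def position_mm(encoder_count):
    """Linear position of the imaging target along the conveyor."""
    return encoder_count / COUNTS_PER_REV * FEED_PER_REV_MM

# Movement amount between two frames, usable as Z in the third embodiment.
z = position_mm(4096) - position_mm(2048)  # 150.0 mm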
The first imaging device 21 may also be a device that detects depth or sound and generates image data based on the detection result.
Some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image identification unit 9, the division unit 10, the correction unit 12, and the corrected image generation unit 13 of the image correction apparatus 1 according to the first embodiment may be implemented by a processor that executes a program stored in a memory.
The processor is a CPU (Central Processing Unit), a processing device, an arithmetic device, a microprocessor, a microcomputer, or a DSP (Digital Signal Processor).
When some or all of the functions of these units are implemented by a processor, those functions are implemented by the processor together with software, firmware, or a combination of software and firmware. The software or firmware is described in the form of programs and stored in the memory. The processor implements the functions of the respective units by reading and executing the programs stored in the memory.
In that case, the image correction apparatus 1 has a memory for storing a program whose execution results in the steps performed by some or all of the above units being carried out. The program stored in the memory can also be said to cause a computer to execute the procedures or methods performed by some or all of those units.
The memory is, for example, a nonvolatile or volatile semiconductor memory such as a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), or an EEPROM (registered trademark) (Electrically Erasable Programmable Read-Only Memory), or a magnetic disk, a flexible disk, an optical disc, a compact disc, a mini disc, or a DVD (Digital Versatile Disk).
Some or all of the functions of the above units of the image correction apparatus 1 according to the first embodiment may instead be implemented by a processing circuit.
The processing circuit is dedicated hardware, for example, a single circuit, a composite circuit, a programmed processor, a parallel-programmed processor, an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a combination of these. Some of the units may be implemented by dedicated hardware separate from the hardware implementing the remainder.
Alternatively, some of the functions of the units may be implemented by software or firmware while the remaining functions are implemented by dedicated hardware. In this way, the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image identification unit 9, the division unit 10, the correction unit 12, and the corrected image generation unit 13 can be implemented by hardware, software, firmware, or a combination thereof.
Similarly, some or all of the functions of the trigger generation unit 2, the communication unit 3, the feature extraction unit 5, the movement amount calculation unit 7, the corrected image identification unit 9, the division unit 10, the correction unit 12, the corrected image generation unit 13, and the interpolated image generation unit 16 of the image correction apparatus 1A according to the third embodiment may be implemented by a processor. When some or all of those functions are implemented by a processor, the image correction apparatus 1A has a memory for storing a program whose execution results in the steps performed by some or all of those units being carried out.
Some or all of the functions of those units of the image correction apparatus 1A according to the third embodiment may also be implemented by a processing circuit.
The configurations described in the above embodiments are examples of the contents of the present invention; they can be combined with other known techniques, and parts of the configurations can be omitted or modified without departing from the gist of the present invention.
1, 1A image correction apparatus, 2 trigger generation unit, 3 communication unit, 4 image storage unit, 5 feature extraction unit, 6 feature storage unit, 7 movement amount calculation unit, 8 movement amount storage unit, 9 corrected image identification unit, 10 division unit, 11 correction value storage unit, 12 correction unit, 13 corrected image generation unit, 14 display unit, 15 corrected image storage unit, 16 interpolated image generation unit, 21 first imaging device, 22 second imaging device, 23 imaging target, 23a blurred image, 24 belt conveyor, 26 encoder, 27 control device, 50 imaging system.

Claims (7)

1. An image correction apparatus comprising:
    a division unit to divide an image obtained during a time in which one frame is formed by a first imaging device that images a moving imaging target into a final time image, which is an image formed at the final time of the time, and a non-final time image, which is an image formed before the final time of the time, and to divide each of the final time image and the non-final time image, based on the position of the imaging target, into a plurality of areas corresponding to the exposure times taken when the respective parts of the final time image and the non-final time image were formed; and
    a correction unit to correct a luminance value of each of the plurality of areas included in the final time image and the non-final time image obtained by the division unit, wherein
    for the plurality of areas included in the final time image, the correction unit corrects the luminance value of a correction target area using a first correction value corresponding to the exposure time taken when the correction target area was formed, and for the plurality of areas included in the non-final time image, the correction unit corrects the luminance value of a correction target area using the first correction value corresponding to the exposure time taken when the correction target area was formed and a second correction value corresponding to the non-final time image.
2. The image correction apparatus according to claim 1, further comprising a communication unit to receive image data from a second imaging device that images the imaging target at a frame rate higher than a frame rate at which the first imaging device images the imaging target, to receive information related to the position of the imaging target from a sensor that detects the position or speed of the imaging target, or to receive information related to the position of the imaging target from a control device that controls a moving speed of the imaging target, wherein
    the division unit divides each of the final time image and the non-final time image into the plurality of areas based on the position of the imaging target specified from the image data or information received by the communication unit.
3. The image correction apparatus according to claim 1, wherein
    the imaging target undergoes accelerated motion,
    the division unit divides the non-final time image into a plurality of areas corresponding to the exposure times taken when the respective parts of the non-final time image were formed and to the acceleration of the imaging target,
    the second correction value corresponds to the non-final time image and differs depending on the acceleration of the imaging target, and
    for the plurality of areas included in the non-final time image, the correction unit corrects the luminance value of a correction target area using the first correction value corresponding to the exposure time taken when the correction target area was formed and the second correction value corresponding to the acceleration of the imaging target when the correction target area was formed.
4. The image correction apparatus according to claim 3, further comprising a communication unit to receive image data from a second imaging device that images the imaging target at a frame rate higher than a frame rate at which the first imaging device images the imaging target, to receive information related to the position of the imaging target from a sensor that detects the position or speed of the imaging target, or to receive information related to the position of the imaging target from a control device that controls a moving speed of the imaging target, wherein
    the division unit divides the non-final time image, based on the position and acceleration of the imaging target specified from the image data or information received by the communication unit, into the plurality of areas corresponding to the exposure times taken when the respective parts of the non-final time image were formed and to the acceleration of the imaging target, and
    in correcting the luminance value of each of the plurality of areas included in the non-final time image, the correction unit performs the correction using the second correction value corresponding to the acceleration of the imaging target specified from the image data or information received by the communication unit.
5. The image correction apparatus according to claim 1 or 3, further comprising an interpolated image generation unit to generate, based on a first image obtained through the correction performed by the correction unit and on the position of the imaging target, an image of the imaging target at a time between a first time at which the pre-correction image of the first image was obtained and a second time at which the pre-correction image of a second image, obtained next after the first image through the correction performed by the correction unit, is obtained.
6. The image correction apparatus according to claim 5, further comprising a communication unit to receive image data from a second imaging device that images the imaging target at a frame rate higher than a frame rate at which the first imaging device images the imaging target, to receive information related to the position of the imaging target from a sensor that detects the position or speed of the imaging target, or to receive information related to the position of the imaging target from a control device that controls a moving speed of the imaging target, wherein
    the interpolated image generation unit generates the image of the imaging target at the time between the first time and the second time based on the position of the imaging target specified from the image data or information received by the communication unit.
7. An image correction method comprising:
    a division step of dividing an image obtained during a time in which one frame is formed by a first imaging device that images a moving imaging target into a final time image, which is an image formed at the final time of the time, and a non-final time image, which is an image formed before the final time of the time, and of dividing each of the final time image and the non-final time image, based on the position of the imaging target, into a plurality of areas corresponding to the exposure times taken when the respective parts of the final time image and the non-final time image were formed; and
    a correction step of correcting a luminance value of each of the plurality of areas included in the final time image and the non-final time image obtained in the division step, wherein
    in the correction step, for the plurality of areas included in the final time image, the luminance value of a correction target area is corrected using a first correction value corresponding to the exposure time taken when the correction target area was formed, and for the plurality of areas included in the non-final time image, the luminance value of a correction target area is corrected using the first correction value corresponding to the exposure time taken when the correction target area was formed and a second correction value corresponding to the non-final time image.
PCT/JP2018/027529 2018-07-23 2018-07-23 Image correction apparatus and image correction method WO2020021601A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2019504145A JP6502003B1 (en) 2018-07-23 2018-07-23 Image correction apparatus and image correction method
PCT/JP2018/027529 WO2020021601A1 (en) 2018-07-23 2018-07-23 Image correction apparatus and image correction method
CN201880095738.7A CN112425147B (en) 2018-07-23 2018-07-23 Image correction device and image correction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2018/027529 WO2020021601A1 (en) 2018-07-23 2018-07-23 Image correction apparatus and image correction method

Publications (1)

Publication Number Publication Date
WO2020021601A1 true WO2020021601A1 (en) 2020-01-30

Family

ID=66166748

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2018/027529 WO2020021601A1 (en) 2018-07-23 2018-07-23 Image correction apparatus and image correction method

Country Status (3)

Country Link
JP (1) JP6502003B1 (en)
CN (1) CN112425147B (en)
WO (1) WO2020021601A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH10276361A (en) * 1997-01-31 1998-10-13 Sony Corp Image pickup device and method, image pickup system, image information service device and method, image data processing unit and method, and transmission medium
JP2009017223A (en) * 2007-07-04 2009-01-22 Sony Corp Imaging device, image processing device, and their image processing method and program
JP2009044236A (en) * 2007-08-06 2009-02-26 Fujifilm Corp White balance adjustment device and white balance adjustment method
JP2017187434A (en) * 2016-04-08 2017-10-12 日本アビオニクス株式会社 Infrared imaging device

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3804617B2 (en) * 2003-02-14 2006-08-02 コニカミノルタフォトイメージング株式会社 Image processing apparatus and method
CN102831870B (en) * 2009-08-18 2015-05-13 夏普株式会社 Display device and method for correcting uneven brightness of display device
US8310538B2 (en) * 2010-03-19 2012-11-13 Fujifilm Corporation Imaging apparatus, method, program, and recording medium used in the program

Also Published As

Publication number Publication date
JP6502003B1 (en) 2019-04-17
JPWO2020021601A1 (en) 2020-07-30
CN112425147A (en) 2021-02-26
CN112425147B (en) 2021-08-27

Similar Documents

Publication Publication Date Title
US9438792B2 (en) Image-processing apparatus and image-processing method for generating a virtual angle of view
JP4509925B2 (en) Image processing apparatus, camera system, image processing method, and moving image display method
US9946955B2 (en) Image registration method
US11722771B2 (en) Information processing apparatus, imaging apparatus, and information processing method each of which issues a notification of blur of an object, and control method for the imaging apparatus
US7495692B2 (en) Image processing device and electronic camera
JP6151930B2 (en) Imaging apparatus and control method thereof
JP2017175364A (en) Image processing device, imaging device, and control method of image processing device
US10348966B2 (en) Optical apparatus and a control method for performing image readout and reconstruction based on a delayed readout from different partial regions
US10009547B2 (en) Image pickup apparatus that compensates for flash band, control method therefor, and storage medium
WO2019151030A1 (en) Imaging device, solid-state imaging element, camera module, drive control unit, and imaging method
CN106454063B (en) Method and apparatus for correcting jitter
JP2014127773A5 (en)
WO2020021601A1 (en) Image correction apparatus and image correction method
JP7387713B2 (en) Imaging device, solid-state imaging device, camera module, drive control unit, and imaging method
JP2006115346A (en) Imaging apparatus and camera shake correction method
JP2014138331A (en) Imaging apparatus and program
JP2015095670A (en) Imaging apparatus, control method thereof and control program
JP5955003B2 (en) Image processing apparatus, image processing method, and program
JP2005130159A (en) Imaging apparatus and camera shake correcting method thereof
JP2014215304A (en) Imaging device
US20180041706A1 (en) Image processing apparatus, optical apparatus, and image processing method
JP5505072B2 (en) Image data processing apparatus and image data processing method
JP6468751B2 (en) Image processing apparatus, image processing method, and program
JP7358653B2 (en) Imaging device, driving method, and imaging program
US10681274B2 (en) Imaging apparatus and control method thereof

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2019504145

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928100

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 18928100

Country of ref document: EP

Kind code of ref document: A1