WO2021233177A1 - Image processing device, imaging device, mobile body, program, and method - Google Patents

Image processing device, imaging device, mobile body, program, and method

Info

Publication number
WO2021233177A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
flicker
reduced
images
image processing
Prior art date
Application number
PCT/CN2021/093317
Other languages
English (en)
French (fr)
Inventor
佐藤数史
邵明
Original Assignee
深圳市大疆创新科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳市大疆创新科技有限公司
Publication of WO2021233177A1

Links

Images

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10: Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/14: Picture signal circuitry for video frequency region
    • H04N5/21: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment

Definitions

  • The invention relates to an image processing device, an imaging device, a mobile body, a program, and a method.
  • Patent Document 1 describes a technique for reducing fluorescent-lamp flicker arising in an image signal obtained from an imaging element.
  • Patent Document 2 describes detecting a surrounding flickering light source and performing imaging that reduces the effect of flicker based on the detected flicker period and phase.
  • Patent Document 1: Japanese Patent Laid-Open No. 2004-222228
  • Patent Document 2: Japanese Patent Laid-Open No. 2016-86206
  • The image processing device includes a circuit configured to perform flicker-reducing image processing on a moving image.
  • The circuit is configured to generate a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image.
  • The circuit is configured to generate, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced.
  • The circuit is configured to generate a third image by increasing the number of pixels of the second image.
  • The circuit is configured to generate a difference image between the first image and a low-spatial-frequency-component image of the first image.
  • The circuit is configured to generate an output image corresponding to the first image by adding the third image and the difference image.
  • The circuit may be configured to generate the third image by increasing the number of pixels of a first reduced image generated by reducing the number of pixels of the first image.
  • The circuit may be configured to perform the flicker-reducing image processing when the magnitude of the flicker component detected from the moving image exceeds a preset value.
  • The circuit may be configured to detect the flicker component from the plurality of reduced images.
  • The circuit may be configured to perform the flicker-reducing image processing on each of the plurality of images when the magnitude of the flicker component detected from the plurality of reduced images exceeds a preset value.
  • The circuit may be configured to perform the flicker-reducing image processing on a region in which a high-frequency component greater than a preset value is detected.
  • The circuit may be configured to perform the flicker-reducing image processing when the amount of motion in the moving image is below a preset value.
  • The circuit may be configured to generate the second image using an image obtained by adding the plurality of reduced images with mutually different weights when gamma correction is applied to the moving image.
  • The circuit may be configured to perform the flicker-reducing image processing only on the Y signal of the moving image when the color space format of the moving image is the YUV format.
  • The imaging device includes the above image processing device and an image sensor that generates the moving image.
  • The mobile body according to the third aspect of the present invention includes the above imaging device and moves.
  • The mobile body may be an unmanned aircraft.
  • The program according to the fourth aspect of the present invention causes a computer to perform flicker-reducing image processing on a moving image.
  • The program causes the computer to generate a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image.
  • The program causes the computer to generate, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced.
  • The program causes the computer to generate a third image by increasing the number of pixels of the second image.
  • The program causes the computer to generate a difference image between the first image and a low-spatial-frequency-component image of the first image.
  • The program causes the computer to generate an output image corresponding to the first image by adding the third image and the difference image.
  • The program may be stored in a non-transitory storage medium.
  • The method according to the fifth aspect of the present invention performs flicker-reducing image processing on a moving image.
  • The method includes a stage of generating a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image.
  • The method includes a stage of generating, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced.
  • The method includes a stage of generating a third image by increasing the number of pixels of the second image.
  • The method includes a stage of generating a difference image between the first image and a low-spatial-frequency-component image of the first image.
  • The method includes a stage of generating an output image corresponding to the first image by adding the third image and the difference image.
  • According to these aspects, the amount of processing for reducing the flicker component in a moving image can be reduced.
  • FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
  • FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
  • FIG. 3 schematically shows the relationship between the brightness variation of illumination and exposure periods.
  • FIG. 4 schematically shows the flow of the de-flicker processing performed by the control unit 110.
  • FIG. 5 schematically shows the module configuration of the image processing unit 500 that executes the image processing related to the de-flickering processing in the control unit 110.
  • FIG. 6 is a flowchart showing the flow of processing executed by the image processing unit 500.
  • FIG. 7 schematically shows the configuration of the moving image data generating module of the control unit 110.
  • FIG. 8 schematically shows another configuration of the moving image data generating module of the control unit 110.
  • FIG. 9 schematically shows the flow of other de-flickering processing executed by the control unit 110.
  • FIG. 10 shows an example of an unmanned aerial vehicle (UAV).
  • FIG. 11 shows an example of a computer 1200 that can embody aspects of the present invention in whole or in part.
  • A block may represent (1) a stage of a process in which an operation is performed or (2) a "unit" of a device that has a role in performing an operation. Specific stages and "units" may be implemented by programmable circuits and/or processors.
  • Dedicated circuits may include digital and/or analog hardware circuits, and may include integrated circuits (ICs) and/or discrete circuits.
  • Programmable circuits may include reconfigurable hardware circuits.
  • Reconfigurable hardware circuits may include memory elements such as logical AND, logical OR, logical XOR, logical NAND, logical NOR and other logical operations, flip-flops, registers, field-programmable gate arrays (FPGA), programmable logic arrays (PLA), and the like.
  • A computer-readable medium may include any tangible device that can store instructions to be executed by a suitable device.
  • A computer-readable medium having instructions stored thereon constitutes a product that includes instructions which can be executed to create means for performing the operations specified in the flowcharts or block diagrams.
  • Examples of computer-readable media may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like.
  • More specific examples of computer-readable media may include floppy (registered trademark) disks, flexible disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), electrically erasable programmable read-only memory (EEPROM), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), Blu-ray (registered trademark) discs, memory sticks, integrated circuit cards, and the like.
  • Computer-readable instructions may include either source code or object code written in any combination of one or more programming languages.
  • The source code or object code includes traditional procedural programming languages.
  • Traditional procedural programming languages may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or object-oriented programming languages such as Smalltalk (registered trademark), JAVA (registered trademark), and C++, and the "C" programming language or similar programming languages.
  • The computer-readable instructions may be provided to a processor or programmable circuit of a general-purpose computer, a special-purpose computer, or another programmable data processing device, either locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet.
  • The processor or programmable circuit may execute the computer-readable instructions to create means for performing the operations specified in the flowcharts or block diagrams.
  • Examples of processors include computer processors, processing units, microprocessors, digital signal processors, controllers, microcontrollers, and the like.
  • FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to this embodiment.
  • FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to this embodiment.
  • The imaging device 100 includes an imaging unit 102 and a lens unit 200.
  • The imaging unit 102 includes an image sensor 120, a control unit 110, a memory 130, an instruction unit 162, a display unit 160, and a communication unit 170.
  • The image sensor 120 may be composed of a CCD or CMOS.
  • The image sensor 120 receives light through a lens 210 included in the lens unit 200.
  • The image sensor 120 outputs image data of the optical image formed by the lens 210 to the control unit 110.
  • The control unit 110 may be constituted by a microprocessor such as a CPU or MPU, a microcontroller such as an MCU, or the like.
  • The memory 130 may be a computer-readable recording medium and may include at least one of flash memories such as SRAM, DRAM, EPROM, EEPROM, and USB memory.
  • The control unit 110 corresponds to a circuit.
  • The memory 130 stores programs and the like necessary for the control unit 110 to control the image sensor 120 and so on.
  • The memory 130 may be provided inside the housing of the imaging device 100.
  • The memory 130 may be configured to be detachable from the housing of the imaging device 100.
  • The instruction unit 162 is a user interface that receives instructions for the imaging device 100 from the user.
  • The display unit 160 displays images captured by the image sensor 120 and processed by the control unit 110, various setting information of the imaging device 100, and the like.
  • The display unit 160 may be composed of a touch panel.
  • The control unit 110 controls the lens unit 200 and the image sensor 120. For example, the control unit 110 controls the focal position and focal length of the lens 210.
  • The control unit 110 controls the lens unit 200 by outputting a control command to a lens control unit 220 included in the lens unit 200 based on information indicating the user's instruction.
  • The lens unit 200 includes one or more lenses 210, a lens drive unit 212, the lens control unit 220, and a memory 222.
  • The one or more lenses 210 are collectively referred to as the "lens 210".
  • The lens 210 may include a focus lens and a zoom lens. At least some or all of the lenses included in the lens 210 are arranged to be movable along the optical axis of the lens 210.
  • The lens unit 200 may be an interchangeable lens detachably provided on the imaging unit 102.
  • The lens drive unit 212 moves at least some or all of the lens 210 along the optical axis of the lens 210.
  • In accordance with a lens control command from the imaging unit 102, the lens control unit 220 drives the lens drive unit 212 to move the entire lens 210, or the zoom lens or focus lens included in the lens 210, along the optical axis, thereby performing at least one of a zoom operation and a focus operation.
  • Lens control commands are, for example, zoom control commands, focus control commands, and the like.
  • The lens drive unit 212 may include a voice coil motor (VCM) that moves at least some or all of the plurality of lenses 210 in the optical-axis direction.
  • The lens drive unit 212 may include an electric motor such as a DC motor, a coreless motor, or an ultrasonic motor.
  • The lens drive unit 212 may transmit power from the motor to at least some or all of the plurality of lenses 210 via mechanism components such as cam rings and guide shafts, to move at least some or all of the lens 210 along the optical axis.
  • The memory 222 stores control values for the focus lens and zoom lens moved via the lens drive unit 212.
  • The memory 222 may include at least one of flash memories such as SRAM, DRAM, EPROM, EEPROM, and USB memory.
  • Based on information indicating a user's instruction acquired through the instruction unit 162 or the like, the control unit 110 outputs control commands to the image sensor 120 to perform control of the image sensor 120 including control of the imaging operation.
  • The control unit 110 acquires images captured by the image sensor 120.
  • The control unit 110 applies image processing to the images acquired from the image sensor 120 and stores them in the memory 130.
  • The communication unit 170 is responsible for communication with the outside.
  • The communication unit 170 transmits information generated by the control unit 110 to the outside through a communication network.
  • The communication unit 170 provides information received from the outside to the control unit 110 via the communication network.
  • The control unit 110 performs flicker-reducing image processing on a moving image.
  • The control unit 110 generates a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image.
  • Using an image obtained by adding the plurality of reduced images, the control unit 110 generates a second image in which the flicker component of a first image included in the plurality of images is reduced.
  • The control unit 110 generates a third image by increasing the number of pixels of the second image.
  • The control unit 110 generates a difference image between the first image and a low-spatial-frequency-component image of the first image.
  • The control unit 110 generates an output image corresponding to the first image by adding the third image and the difference image.
  • The control unit 110 may generate the third image by increasing the number of pixels of a first reduced image generated by reducing the number of pixels of the first image.
  • The control unit 110 may generate the third image by up-sampling the first reduced image.
  • The control unit 110 may generate the low-spatial-frequency-component image by extracting low spatial-frequency components from the first image.
  • When the magnitude of the flicker component detected from the moving image exceeds a preset value, the control unit 110 performs the flicker-reducing image processing.
  • The control unit 110 may detect the flicker component from the plurality of reduced images.
  • When the magnitude of the flicker component detected from the plurality of reduced images exceeds a preset value, the control unit 110 may perform the flicker-reducing image processing on each of the plurality of images.
  • The control unit 110 may perform the flicker-reducing image processing on a region in which a high-frequency component greater than a preset value is detected.
  • When the amount of motion in the moving image is below a preset value, the control unit 110 may perform the flicker-reducing image processing.
  • When gamma correction is applied to the moving image, the control unit 110 may generate the second image using an image obtained by adding the plurality of reduced images with mutually different weights.
  • When the color space format of the moving image is the YUV format, the circuit may perform the flicker-reducing image processing only on the Y signal of the moving image.
  • FIG. 3 schematically shows the relationship between the brightness variation of illumination and exposure periods. If non-inverter illumination is driven by power with a supply frequency of 50 Hz, the brightness of the illumination varies with a period of 1/100 second, as shown in FIG. 3.
  • FIG. 3 shows the exposure periods when the imaging unit 102 shoots a moving image with a frame period of 1/60 second in accordance with a vertical synchronization signal. As shown in FIG. 3, across the consecutive frame periods in which frame I1, frame I2, and frame I3 are shot, the exposure time of each frame is offset relative to the brightness variation of the illumination. The exposure amount of the illumination light within one exposure period therefore changes between consecutive frame periods, and a phenomenon in which brightness variations occur in the moving image (the flicker phenomenon) arises.
  • The phase of the start time of the exposure period relative to the brightness variation becomes the same every three frames.
  • The phase of the start time of the exposure period of frame I1 is the same as that of frame I4, shot three frames later. The exposure amount of the illumination light therefore repeats every three frames.
  • A spectrum of the brightness or color components can be extracted from the three-frame superimposed image, and the amplitude and phase of the flicker component can be determined based on the extracted spectrum to perform de-flicker processing that reduces the flicker component in the frame.
  • The control unit 110 performs de-flicker processing on reduced frames obtained by shrinking the plurality of frames generated by the image sensor 120. This suppresses the increase in the amount of processing required to execute the de-flicker processing.
  • The control unit 110 extracts high spatial-frequency components from the frame and generates an output frame using the reduced frame images and the high spatial-frequency components of the frames generated by the image sensor 120. This suppresses the loss of resolution caused by the de-flicker processing.
  • FIG. 4 schematically shows the flow of the de-flicker processing performed by the control unit 110.
  • A previous frame 401, a current frame 402, and a next frame 403 are images constituting the moving image.
  • The previous frame 401, current frame 402, and next frame 403 are frames generated consecutively by the image sensor 120 and input to the de-flicker processing section of the control unit 110.
  • The previous frame 401, current frame 402, and next frame 403 are images with N×M pixels.
  • The name "current frame" is used to clearly identify the frame that serves as the reference among three consecutive frames and does not mean "current".
  • The names "previous frame" and "next frame" clearly indicate the shooting order relative to the "current frame".
  • The control unit 110 generates a reduced frame 411 by down-sampling the previous frame 401.
  • The control unit 110 generates a reduced frame 412 by down-sampling the current frame 402.
  • The control unit 110 generates a reduced frame 413 by down-sampling the next frame 403.
  • The reduced frame 411, reduced frame 412, and reduced frame 413 are images with N/2 × M/2 pixels.
  • The control unit 110 performs de-flicker processing using the reduced frame 411, reduced frame 412, and reduced frame 413. Specifically, as described in connection with FIG. 3 and elsewhere, a de-flickered image 432 is generated using a frame obtained by adding and averaging the reduced frame 411, reduced frame 412, and reduced frame 413. The control unit 110 then generates an enlarged image 442 by up-sampling the de-flickered image 432.
  • The enlarged image 442 is an image with N×M pixels.
  • The control unit 110 generates a low-frequency-component image 452 by up-sampling the reduced frame 412.
  • The low-frequency-component image 452 is an image with N×M pixels.
  • The low-frequency-component image 452 is an image obtained by applying down-sampling and then up-sampling to the current frame 402. It can therefore be regarded as an image in which the high spatial-frequency components of the current frame 402 are reduced.
  • The control unit 110 generates a high-frequency-component image 462 from the difference between the current frame 402 and the low-frequency-component image 452. The high-frequency-component image 462 can be regarded as an image with N×M pixels in which the high spatial-frequency components of the current frame 402 are extracted.
  • The control unit 110 generates an output frame 470 by adding the high-frequency-component image 462 and the enlarged image 442.
  • The output frame 470 is an image with N×M pixels.
  • Because the control unit 110 performs the de-flicker processing on the reduced frame 411, reduced frame 412, and reduced frame 413, the increase in the amount of processing required for the de-flicker processing can be suppressed.
  • Because the control unit 110 adds the high-frequency-component image extracted from the current frame 402 to the enlarged image 442, the loss of high-frequency components caused by the frame-superposition de-flicker processing can be suppressed.
  • FIG. 5 schematically shows the module configuration of the image processing unit 500 that executes the image processing involved in the de-flicker processing in the control unit 110.
  • A down-sampling unit 510 generates the reduced frame 411 by down-sampling the previous frame 401.
  • A down-sampling unit 520 generates the reduced frame 412 by down-sampling the current frame 402.
  • A down-sampling unit 530 generates the reduced frame 413 by down-sampling the next frame 403.
  • Down-sampling is processing that reduces the number of pixels. Examples of down-sampling include computing a weighted average of the pixels surrounding the pixel of interest, pixel-thinning processing, and the like.
  • A de-flicker unit 540 generates the de-flickered image 432 by performing de-flicker processing on the reduced frame 411, reduced frame 412, and reduced frame 413.
  • An up-sampling unit 550 generates the enlarged image 442 by up-sampling the de-flickered image 432. Up-sampling is processing that increases the number of pixels.
  • Up-sampling can be, for example, interpolation using the pixel values of adjacent pixels.
  • An up-sampling unit 560 generates the low-frequency-component image 452 by up-sampling the reduced frame 412 generated by the down-sampling unit 520.
  • A difference processing unit 570 generates the high-frequency-component image 462 by subtracting, pixel by pixel, the low-frequency-component image 452 generated by the up-sampling unit 560 from the current frame 402.
  • An addition unit 580 generates the output frame 470 by adding, pixel by pixel, the high-frequency-component image 462 generated by the difference processing unit 570 and the enlarged image 442 generated by the up-sampling unit 550.
  • The image processing unit 500 repeats the above processing for successive sets of three consecutive frames to generate a plurality of output frames in which the flicker component in the moving image is reduced.
  • FIG. 6 is a flowchart showing the flow of processing executed by the image processing unit 500.
  • In S600, the down-sampling unit 510, down-sampling unit 520, and down-sampling unit 530 generate the reduced frame 411, reduced frame 412, and reduced frame 413 by down-sampling the previous frame 401, current frame 402, and next frame 403, respectively.
  • In S610, the image processing unit 500 determines whether flicker has occurred. For example, the image processing unit 500 extracts a flicker component from the reduced frame 411, reduced frame 412, and reduced frame 413. Specifically, the image processing unit 500 extracts a spectrum from an image obtained by superimposing the reduced frame 411, reduced frame 412, and reduced frame 413, and extracts the flicker component from that spectrum. When the amplitude of the extracted flicker component exceeds a preset value, the image processing unit 500 determines that flicker has occurred; when the amplitude of the flicker component is less than or equal to the preset value, it determines that no flicker has occurred. The image processing unit 500 may also use the reduced frames generated in S600 to perform the flicker detection processing described in Patent Document 2 above. This reduces the amount of processing required for flicker detection.
  • When it is determined that no flicker has occurred, the image processing unit 500 outputs the current frame as the output frame in S680.
  • When it is determined that flicker has occurred, in S620 the image processing unit 500 performs de-flicker processing using the reduced frame 411, reduced frame 412, and reduced frame 413, and generates the de-flickered image 432.
  • In S630, the up-sampling unit 550 increases the number of pixels by up-sampling the de-flickered image 432, thereby generating the enlarged image 442.
  • In S640, the up-sampling unit 560 increases the number of pixels by up-sampling the reduced frame 412 corresponding to the current frame 402, thereby generating the low-frequency-component image 452.
  • In S650, the difference processing unit 570 generates the high-frequency-component image 462 by subtracting the low-frequency-component image 452 from the current frame 402.
  • In S660, the addition unit 580 generates the output frame 470 by adding the high-frequency-component image 462 generated in S650 and the enlarged image 442 generated in S630.
  • In S670, the output frame 470 generated in S660 is output as a frame constituting the moving image in which the flicker component is reduced.
  • The image processing unit 500 may execute the processing from S620 to S670 without performing the flicker detection processing of S610.
  • When the amount of motion in the moving image is greater than or equal to a preset value, the image processing unit 500 may refrain from performing the above de-flicker processing.
  • The amount of motion in the moving image may be the global motion of the imaging device 100.
  • The amount of motion in the moving image may be translation, zoom, rotation, or the like of the imaging device 100.
  • The image processing unit 500 may acquire the amount of motion in the moving image based on the detection value of a gyro sensor provided in the imaging device 100.
  • The image processing unit 500 may detect the amount of motion in the moving image from the plurality of frames constituting the moving image.
  • The image processing unit 500 may detect the amount of motion in the moving image from the plurality of reduced frames.
  • The image processing unit 500 may perform de-flicker processing on only a partial region.
  • The image processing unit 500 may select a partial region in which spatial-frequency components above a preset frequency are greater than a preset value, perform the de-flicker processing on the selected partial region, and not perform the de-flicker processing on regions other than the selected partial region.
  • The image processing unit 500 may also dispense with deciding whether to perform de-flicker processing on the entire image: when there is a partial region in which spatial-frequency components above a preset frequency are greater than a preset value, the above de-flicker processing is performed on that partial region.
  • FIG. 7 schematically shows the configuration of the moving-image data generation module of the control unit 110.
  • RAW data denotes the image data of the frames generated in the image sensor 120 and sequentially input to the control unit 110.
  • RAW data has, for each pixel, a brightness value of a color component preset per pixel.
  • A storage unit 780 holds the image data needed for the image processing in the image processing unit 500.
  • The image processing unit 500 reduces the sequentially input RAW data and stores it in the storage unit 780.
  • When the reduced frame 413 is generated from the RAW data corresponding to the next frame 403, the image processing unit 500 acquires the reduced frames of the previous and current frames from the storage unit 780 and performs image processing including the above de-flicker processing. In this way, the image processing unit 500 executes the above de-flicker processing in RAW space and generates an output frame in RAW format.
  • A YUV conversion unit 710 applies YUV conversion processing to the RAW-format output frame to generate YUV data.
  • The YUV conversion unit 710 performs YUV conversion processing that includes gamma correction. In this way, the control unit 110 executes the above de-flicker processing in RAW space before the YUV conversion unit 710 performs the processing that converts pixel values into non-linear pixel values. The pixel signals processed by the image processing unit 500 are therefore pixel signals whose intensity is linear with respect to the pixel signals of the image sensor 120, so the flicker component can be reduced more appropriately.
  • FIG. 8 schematically shows another configuration of the moving-image data generation module of the control unit 110.
  • A YUV conversion unit 810 applies conversion processing including gamma correction to the RAW data.
  • An image processing unit 800 applies image processing including de-flicker processing to the YUV data.
  • The image processing unit 800 generates YUV data by applying the above de-flicker processing to each of the Y signal, the U signal, and the V signal.
  • The image processing unit 800 may generate YUV data by applying the above de-flicker processing only to the Y signal.
  • When the YUV data is in 4:2:0 format, the de-flicker processing can be performed based on the configuration shown in FIG. 8 to further reduce the amount of processing required for the de-flicker processing.
  • The YUV data input to the image processing unit 800 has undergone non-linear processing such as gamma correction. Consequently, there may be cases where simply superimposing a plurality of frames cannot sufficiently reduce the flicker component. The image processing unit 800 therefore performs superposition by applying a weighted addition to the input YUV data corresponding to a plurality of frames.
  • The image processing unit 800 may superimpose the YUV data of a plurality of frames using a weighted addition that depends on the intensity of the pixel signal.
  • The image processing unit 800 may determine the weighting coefficients for superimposing the YUV data of a plurality of frames by taking into account the processing information of the gamma correction performed by the YUV conversion unit 810. This reduces the influence that the non-linear processing performed upstream of the image processing unit 800 has on the de-flicker processing.
  • FIG. 9 schematically shows the flow of another de-flicker processing performed by the control unit 110.
  • The main difference between the processing shown in FIG. 9 and the processing shown in FIG. 4 is that the frames constituting the moving image are down-sampled in two stages. Descriptions of processing identical to that described in connection with FIG. 4 and elsewhere are therefore omitted in places.
  • The down-sampling unit 510 generates a reduced frame 921 by further down-sampling the reduced frame 411.
  • The down-sampling unit 520 generates a reduced frame 922 by further down-sampling the reduced frame 412.
  • The down-sampling unit 530 generates a reduced frame 923 by further down-sampling the reduced frame 413.
  • The reduced frame 921, reduced frame 922, and reduced frame 923 are images with N/4 × M/4 pixels.
  • The control unit 110 performs de-flicker processing using the reduced frame 921, reduced frame 922, and reduced frame 923. Specifically, as described in connection with FIG. 3 and elsewhere, a de-flickered image 932 is generated using a frame obtained by adding and averaging the reduced frame 921, reduced frame 922, and reduced frame 923. The up-sampling unit 550 then generates an enlarged image 942 by up-sampling the de-flickered image 932.
  • The enlarged image 942 is an image with N×M pixels.
  • The up-sampling unit 560 generates a low-frequency-component image 952 by up-sampling the reduced frame 922.
  • The low-frequency-component image 952 is an image with N×M pixels. It can be regarded as an image in which the high spatial-frequency components of the current frame 402 are reduced.
  • The difference processing unit 570 generates a high-frequency-component image 962 from the difference between the current frame 402 and the low-frequency-component image 952. The high-frequency-component image 962 can be regarded as an image with N×M pixels in which the high spatial-frequency components of the current frame 402 are extracted.
  • The addition unit 580 generates an output frame 970 by adding the high-frequency-component image 962 and the enlarged image 942.
  • The output frame 970 is an image with N×M pixels.
  • FIG. 9 shows a configuration that performs down-sampling in two stages, but a configuration that performs down-sampling in three or more stages may also be adopted.
  • The case where de-flicker processing is performed using an image in which three frames are superimposed, with an illumination brightness variation period of 1/100 second and a frame period of 1/60 second, has been described above.
  • For other combinations of illumination brightness variation period and frame period, the same de-flicker processing can be performed by appropriately adjusting the number of superimposed frames.
  • The amount of processing required for the de-flicker processing can thus be reduced, and the loss of high spatial-frequency components caused by the de-flicker processing can be suppressed.
  • Some or all of the functions of the imaging device 100 may be integrated into a mobile terminal such as a mobile phone.
  • The imaging device 100 may be a surveillance camera.
  • The imaging device 100 may be a video camera or the like. Some or all of the functions of the imaging device 100 may be integrated into any device capable of capturing moving images.
  • The imaging device 100 described above may be mounted on a mobile body.
  • The imaging device 100 may be mounted on an unmanned aerial vehicle (UAV) as shown in FIG. 10.
  • The UAV 10 may include a UAV body 20, a gimbal 50, a plurality of imaging devices 60, and the imaging device 100.
  • The gimbal 50 and the imaging device 100 are an example of an imaging system.
  • The UAV 10 is an example of a mobile body propelled by a propulsion unit.
  • The concept of a mobile body includes, in addition to UAVs, flying bodies such as airplanes that move through the air, vehicles that move on the ground, ships that move on water, and the like.
  • The UAV body 20 includes a plurality of rotors. The plurality of rotors are an example of a propulsion unit.
  • The UAV body 20 makes the UAV 10 fly by controlling the rotation of the plurality of rotors.
  • The UAV body 20 uses, for example, four rotors to fly the UAV 10.
  • The number of rotors is not limited to four.
  • The UAV 10 may also be a fixed-wing aircraft without rotors.
  • The imaging device 100 is an imaging camera that images a subject included in a desired imaging range.
  • The gimbal 50 rotatably supports the imaging device 100.
  • The gimbal 50 is an example of a support mechanism.
  • The gimbal 50 uses an actuator to support the imaging device 100 rotatably about the pitch axis.
  • The gimbal 50 uses actuators to further support the imaging device 100 rotatably about each of the roll axis and the yaw axis.
  • The gimbal 50 can change the attitude of the imaging device 100 by rotating the imaging device 100 about at least one of the yaw axis, the pitch axis, and the roll axis.
  • The plurality of imaging devices 60 are sensing cameras that image the surroundings of the UAV 10 in order to control the flight of the UAV 10.
  • Two imaging devices 60 may be provided on the nose, that is, the front, of the UAV 10.
  • The other two imaging devices 60 may be provided on the bottom of the UAV 10.
  • The two imaging devices 60 on the front side may be paired to function as a so-called stereo camera.
  • The two imaging devices 60 on the bottom side may also be paired to function as a stereo camera.
  • Three-dimensional spatial data around the UAV 10 can be generated from the images captured by the plurality of imaging devices 60.
  • The number of imaging devices 60 included in the UAV 10 is not limited to four. It is sufficient for the UAV 10 to include at least one imaging device 60.
  • The UAV 10 may also include at least one imaging device 60 on each of the nose, tail, sides, bottom, and top of the UAV 10.
  • The angle of view that can be set in the imaging device 60 may be larger than the angle of view that can be set in the imaging device 100.
  • The imaging device 60 may have a single-focus lens or a fisheye lens.
  • A remote operation device 300 communicates with the UAV 10 to operate the UAV 10 remotely.
  • The remote operation device 300 may communicate with the UAV 10 wirelessly.
  • The remote operation device 300 transmits to the UAV 10 instruction information indicating various commands related to the movement of the UAV 10, such as ascending, descending, accelerating, decelerating, moving forward, moving backward, and rotating.
  • The instruction information includes, for example, instruction information that raises the altitude of the UAV 10.
  • The instruction information may indicate the altitude at which the UAV 10 should be located.
  • The UAV 10 moves so as to be located at the altitude indicated by the instruction information received from the remote operation device 300.
  • The instruction information may include an ascent command that makes the UAV 10 ascend. The UAV 10 ascends while it is receiving the ascent command. When the altitude of the UAV 10 has reached its upper limit, the ascent of the UAV 10 can be restricted even if the ascent command is accepted.
  • FIG. 11 shows an example of a computer 1200 that may embody aspects of the present invention in whole or in part.
  • A program installed on the computer 1200 can cause the computer 1200 to function as an operation associated with a device according to an embodiment of the present invention, or as one or more "units" of that device.
  • For example, a program installed on the computer 1200 can cause the computer 1200 to function as the control unit 110.
  • The program can cause the computer 1200 to execute the associated operations or the functions of the one or more "units".
  • The program can cause the computer 1200 to execute the processes, or stages of the processes, according to embodiments of the present invention.
  • Such a program may be executed by the CPU 1212 so that the computer 1200 executes the specified operations associated with some or all of the blocks in the flowcharts and block diagrams described in this specification.
  • The computer 1200 of the present embodiment includes the CPU 1212 and a RAM 1214, which are connected to each other by a host controller 1210.
  • The computer 1200 also includes a communication interface 1222 and an input/output unit, which are connected to the host controller 1210 via an input/output controller 1220.
  • The computer 1200 also includes a ROM 1230.
  • The CPU 1212 operates according to programs stored in the ROM 1230 and the RAM 1214, thereby controlling each unit.
  • The communication interface 1222 communicates with other electronic devices through a network.
  • A hard disk drive may store programs and data used by the CPU 1212 in the computer 1200.
  • The ROM 1230 stores a boot program executed by the computer 1200 at startup and/or programs that depend on the hardware of the computer 1200.
  • Programs are provided through a computer-readable recording medium such as a CD-ROM, USB memory, or IC card, or through a network.
  • A program is installed in the RAM 1214 or ROM 1230, which are also examples of computer-readable recording media, and is executed by the CPU 1212.
  • The information processing described in these programs is read by the computer 1200 and brings about cooperation between the programs and the various types of hardware resources described above.
  • A device or method may be constituted by realizing operations or processing of information according to the use of the computer 1200.
  • For example, the CPU 1212 may execute a communication program loaded in the RAM 1214 and, based on the processing described in the communication program, instruct the communication interface 1222 to perform communication processing.
  • Under the control of the CPU 1212, the communication interface 1222 reads transmission data stored in a transmission buffer provided in a recording medium such as the RAM 1214 or a USB memory and sends the read transmission data to the network, or writes reception data received from the network into a reception buffer provided in the recording medium.
  • The CPU 1212 may cause the RAM 1214 to read all or a necessary portion of a file or database stored in an external recording medium such as a USB memory, and perform various types of processing on the data in the RAM 1214. The CPU 1212 may then write the processed data back to the external recording medium.
  • On data read from the RAM 1214, the CPU 1212 may execute the various types of processing described throughout this disclosure and specified by the instruction sequences of programs, including various types of operations, information processing, condition determination, conditional branching, unconditional branching, and information search/replacement, and write the results back to the RAM 1214.
  • The CPU 1212 may also search for information in files, databases, and the like in recording media. For example, when a plurality of entries each having an attribute value of a first attribute associated with an attribute value of a second attribute is stored in a recording medium, the CPU 1212 may search the plurality of entries for an entry matching the condition specifying the attribute value of the first attribute, read the attribute value of the second attribute stored in that entry, and thereby obtain the attribute value of the second attribute associated with the first attribute satisfying a preset condition.
  • The programs or software modules described above may be stored on the computer 1200 or on a computer-readable storage medium near the computer 1200.
  • A recording medium such as a hard disk or RAM provided in a server system connected to a dedicated communication network or the Internet can be used as a computer-readable storage medium, whereby the program can be provided to the computer 1200 via the network.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Picture Signal Circuits (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An image processing device includes a circuit configured to perform flicker-reducing image processing on a moving image. The circuit is configured to generate a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image. The circuit is configured to generate, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced. The circuit is configured to generate a third image by increasing the number of pixels of the second image. The circuit is configured to generate a difference image between the first image and a low-spatial-frequency-component image of the first image. The circuit is configured to generate an output image corresponding to the first image by adding the third image and the difference image.

Description

Image processing device, imaging device, mobile body, program, and method

Technical field
The present invention relates to an image processing device, an imaging device, a mobile body, a program, and a method.
Background art
Patent Document 1 describes a technique for reducing fluorescent-lamp flicker arising in an image signal obtained from an imaging element. Patent Document 2 describes detecting a surrounding flickering light source and performing imaging that reduces the effect of flicker based on the detected flicker period and phase.
[Prior art documents]
[Patent documents]
[Patent Document 1] Japanese Patent Laid-Open No. 2004-222228
[Patent Document 2] Japanese Patent Laid-Open No. 2016-86206
Summary of the invention
The image processing device according to the first aspect of the present invention includes a circuit configured to perform flicker-reducing image processing on a moving image. The circuit is configured to generate a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image. The circuit is configured to generate, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced. The circuit is configured to generate a third image by increasing the number of pixels of the second image. The circuit is configured to generate a difference image between the first image and a low-spatial-frequency-component image of the first image. The circuit is configured to generate an output image corresponding to the first image by adding the third image and the difference image.
The circuit may be configured to generate the third image by increasing the number of pixels of a first reduced image generated by reducing the number of pixels of the first image.
The circuit may be configured to perform the flicker-reducing image processing when the magnitude of the flicker component detected from the moving image exceeds a preset value.
The circuit may be configured to detect the flicker component from the plurality of reduced images. The circuit may be configured to perform the flicker-reducing image processing on each of the plurality of images when the magnitude of the flicker component detected from the plurality of reduced images exceeds a preset value.
The circuit may be configured to perform the flicker-reducing image processing on a region in which a high-frequency component greater than a preset value is detected.
The circuit may be configured to perform the flicker-reducing image processing when the amount of motion in the moving image is below a preset value.
The circuit may be configured to, when gamma correction is applied to the moving image, generate the second image using an image obtained by adding the plurality of reduced images with mutually different weights.
The circuit may be configured to, when the color space format of the moving image is the YUV format, perform the flicker-reducing image processing only on the Y signal of the moving image.
The imaging device according to the second aspect of the present invention includes the above image processing device and an image sensor that generates the moving image.
The mobile body according to the third aspect of the present invention includes the above imaging device and moves.
The mobile body may be an unmanned aircraft.
The program according to the fourth aspect of the present invention causes a computer to perform flicker-reducing image processing on a moving image. The program causes the computer to generate a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image. The program causes the computer to generate, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced. The program causes the computer to generate a third image by increasing the number of pixels of the second image. The program causes the computer to generate a difference image between the first image and a low-spatial-frequency-component image of the first image. The program causes the computer to generate an output image corresponding to the first image by adding the third image and the difference image. The program may be stored in a non-transitory storage medium.
The method according to the fifth aspect of the present invention performs flicker-reducing image processing on a moving image. The method includes a stage of generating a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image. The method includes a stage of generating, using an image obtained by adding the plurality of reduced images, a second image in which the flicker component in a first image included in the plurality of images is reduced. The method includes a stage of generating a third image by increasing the number of pixels of the second image. The method includes a stage of generating a difference image between the first image and a low-spatial-frequency-component image of the first image. The method includes a stage of generating an output image corresponding to the first image by adding the third image and the difference image.
According to the above aspects of the present invention, the amount of processing for reducing the flicker component in a moving image can be reduced.
Moreover, the above summary does not enumerate all of the essential features of the present invention. Sub-combinations of these feature groups may also constitute inventions.
Brief description of the drawings
FIG. 1 is a diagram showing an example of an external perspective view of an imaging device 100 according to the present embodiment.
FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to the present embodiment.
FIG. 3 schematically shows the relationship between the brightness variation of illumination and exposure periods.
FIG. 4 schematically shows the flow of the de-flicker processing performed by the control unit 110.
FIG. 5 schematically shows the module configuration of the image processing unit 500 that executes the image processing involved in the de-flicker processing in the control unit 110.
FIG. 6 is a flowchart showing the flow of processing executed by the image processing unit 500.
FIG. 7 schematically shows the configuration of the moving-image data generation module of the control unit 110.
FIG. 8 schematically shows another configuration of the moving-image data generation module of the control unit 110.
FIG. 9 schematically shows the flow of another de-flicker processing executed by the control unit 110.
FIG. 10 shows an example of an unmanned aerial vehicle (UAV).
FIG. 11 shows an example of a computer 1200 that may embody aspects of the present invention in whole or in part.
Detailed description
Hereinafter, the present invention will be described through embodiments of the invention, but the following embodiments do not limit the invention according to the claims. Moreover, not all combinations of features described in the embodiments are necessarily essential to the solution of the invention. It will be apparent to those skilled in the art that various changes or improvements can be made to the following embodiments. It is apparent from the description of the claims that modes to which such changes or improvements are made can also be included within the technical scope of the present invention.
The claims, the specification, the drawings, and the abstract contain matters subject to copyright protection. The copyright holder will not object to anyone's reproduction of these documents as they appear in the files or records of the Patent Office. In all other cases, however, all copyrights are reserved.
Various embodiments of the present invention may be described with reference to flowcharts and block diagrams, where a block may represent (1) a stage of a process in which an operation is performed or (2) a "unit" of a device that has a role in performing an operation. Specific stages and "units" may be implemented by programmable circuits and/or processors. Dedicated circuits may include digital and/or analog hardware circuits, and may include integrated circuits (ICs) and/or discrete circuits. Programmable circuits may include reconfigurable hardware circuits. Reconfigurable hardware circuits may include memory elements such as logical AND, logical OR, logical XOR, logical NAND, logical NOR and other logical operations, flip-flops, registers, field-programmable gate arrays (FPGA), programmable logic arrays (PLA), and the like.
A computer-readable medium may include any tangible device that can store instructions to be executed by a suitable device. As a result, a computer-readable medium having instructions stored thereon constitutes a product that includes instructions which can be executed to create means for performing the operations specified in the flowcharts or block diagrams. Examples of computer-readable media may include electronic storage media, magnetic storage media, optical storage media, electromagnetic storage media, semiconductor storage media, and the like. More specific examples of computer-readable media may include floppy (registered trademark) disks, flexible disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), electrically erasable programmable read-only memory (EEPROM), static random access memory (SRAM), compact disc read-only memory (CD-ROM), digital versatile discs (DVD), Blu-ray (registered trademark) discs, memory sticks, integrated circuit cards, and the like.
Computer-readable instructions may include either source code or object code written in any combination of one or more programming languages. The source code or object code includes traditional procedural programming languages. Traditional procedural programming languages may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or object-oriented programming languages such as Smalltalk (registered trademark), JAVA (registered trademark), and C++, and the "C" programming language or similar programming languages. The computer-readable instructions may be provided to a processor or programmable circuit of a general-purpose computer, a special-purpose computer, or another programmable data processing device, either locally or via a local area network (LAN) or a wide area network (WAN) such as the Internet. The processor or programmable circuit may execute the computer-readable instructions to create means for performing the operations specified in the flowcharts or block diagrams. Examples of processors include computer processors, processing units, microprocessors, digital signal processors, controllers, microcontrollers, and the like.
FIG. 1 is a diagram showing an example of an external perspective view of the imaging device 100 according to the present embodiment. FIG. 2 is a diagram showing functional blocks of the imaging device 100 according to the present embodiment.
The imaging device 100 includes an imaging unit 102 and a lens unit 200. The imaging unit 102 includes an image sensor 120, a control unit 110, a memory 130, an instruction unit 162, a display unit 160, and a communication unit 170.
The image sensor 120 may be composed of a CCD or CMOS. The image sensor 120 receives light through a lens 210 included in the lens unit 200. The image sensor 120 outputs image data of the optical image formed by the lens 210 to the control unit 110.
The control unit 110 may be constituted by a microprocessor such as a CPU or MPU, a microcontroller such as an MCU, or the like. The memory 130 may be a computer-readable recording medium and may include at least one of flash memories such as SRAM, DRAM, EPROM, EEPROM, and USB memory. The control unit 110 corresponds to a circuit. The memory 130 stores programs and the like necessary for the control unit 110 to control the image sensor 120 and so on. The memory 130 may be provided inside the housing of the imaging device 100, or may be configured to be detachable from the housing of the imaging device 100.
The instruction unit 162 is a user interface that receives instructions for the imaging device 100 from the user. The display unit 160 displays images captured by the image sensor 120 and processed by the control unit 110, various setting information of the imaging device 100, and the like. The display unit 160 may be composed of a touch panel.
The control unit 110 controls the lens unit 200 and the image sensor 120. For example, the control unit 110 controls the focal position and focal length of the lens 210. The control unit 110 controls the lens unit 200 by outputting a control command to a lens control unit 220 included in the lens unit 200 based on information indicating the user's instruction.
The lens unit 200 includes one or more lenses 210, a lens drive unit 212, the lens control unit 220, and a memory 222. In the present embodiment, the one or more lenses 210 are collectively referred to as the "lens 210". The lens 210 may include a focus lens and a zoom lens. At least some or all of the lenses included in the lens 210 are arranged to be movable along the optical axis of the lens 210. The lens unit 200 may be an interchangeable lens detachably provided on the imaging unit 102.
The lens drive unit 212 moves at least some or all of the lens 210 along the optical axis of the lens 210. In accordance with a lens control command from the imaging unit 102, the lens control unit 220 drives the lens drive unit 212 to move the entire lens 210, or the zoom lens or focus lens included in the lens 210, along the optical axis, thereby performing at least one of a zoom operation and a focus operation. Lens control commands are, for example, zoom control commands, focus control commands, and the like.
The lens drive unit 212 may include a voice coil motor (VCM) that moves at least some or all of the plurality of lenses 210 in the optical-axis direction. The lens drive unit 212 may include an electric motor such as a DC motor, a coreless motor, or an ultrasonic motor. The lens drive unit 212 may transmit power from the motor to at least some or all of the plurality of lenses 210 via mechanism components such as cam rings and guide shafts, to move at least some or all of the lens 210 along the optical axis.
The memory 222 stores control values for the focus lens and zoom lens moved via the lens drive unit 212. The memory 222 may include at least one of flash memories such as SRAM, DRAM, EPROM, EEPROM, and USB memory.
Based on information indicating a user's instruction acquired through the instruction unit 162 or the like, the control unit 110 outputs control commands to the image sensor 120 to perform control of the image sensor 120 including control of the imaging operation. The control unit 110 acquires images captured by the image sensor 120. The control unit 110 applies image processing to the images acquired from the image sensor 120 and stores them in the memory 130.
The communication unit 170 is responsible for communication with the outside. The communication unit 170 transmits information generated by the control unit 110 to the outside through a communication network. The communication unit 170 provides information received from the outside to the control unit 110 via the communication network.
An outline of the image processing executed by the control unit 110 will now be described. The control unit 110 performs flicker-reducing image processing on a moving image. The control unit 110 generates a plurality of reduced images by reducing the number of pixels of each of the plurality of images constituting the moving image. Using an image obtained by adding the plurality of reduced images, the control unit 110 generates a second image in which the flicker component of a first image included in the plurality of images is reduced. The control unit 110 generates a third image by increasing the number of pixels of the second image. The control unit 110 generates a difference image between the first image and a low-spatial-frequency-component image of the first image. The control unit 110 generates an output image corresponding to the first image by adding the third image and the difference image.
The control unit 110 may generate the third image by increasing the number of pixels of a first reduced image generated by reducing the number of pixels of the first image. The control unit 110 may generate the third image by up-sampling the first reduced image. The control unit 110 may also generate the low-spatial-frequency-component image by extracting low spatial-frequency components from the first image.
When the magnitude of the flicker component detected from the moving image exceeds a preset value, the control unit 110 performs the flicker-reducing image processing. The control unit 110 may detect the flicker component from the plurality of reduced images. When the magnitude of the flicker component detected from the plurality of reduced images exceeds a preset value, the control unit 110 may perform the flicker-reducing image processing on each of the plurality of images.
The control unit 110 may perform the flicker-reducing image processing on a region in which a high-frequency component greater than a preset value is detected. When the amount of motion in the moving image is below a preset value, the control unit 110 may perform the flicker-reducing image processing.
When gamma correction is applied to the moving image, the control unit 110 may generate the second image using an image obtained by adding the plurality of reduced images with mutually different weights. When the color space format of the moving image is the YUV format, the circuit may perform the flicker-reducing image processing only on the Y signal of the moving image.
FIG. 3 schematically shows the relationship between the brightness variation of illumination and exposure periods. If non-inverter illumination is driven by power with a supply frequency of 50 Hz, the brightness of the illumination varies with a period of 1/100 second, as shown in FIG. 3. FIG. 3 shows the exposure periods when the imaging unit 102 shoots a moving image with a frame period of 1/60 second in accordance with a vertical synchronization signal. As shown in FIG. 3, across the consecutive frame periods in which frame I1, frame I2, and frame I3 are shot, the exposure time of each frame is offset relative to the brightness variation of the illumination. As a result, the exposure amount of the illumination light within one exposure period changes between consecutive frame periods, and a phenomenon in which brightness variations occur in the moving image (the flicker phenomenon) arises.
As shown in FIG. 3, when the brightness variation period of the illumination is 1/100 second and the frame period is 1/60 second, the phase of the start time of the exposure period relative to the brightness variation of the illumination becomes the same every three frames. For example, relative to the brightness variation of the illumination, the phase of the start time of the exposure period of frame I1 is the same as that of frame I4, shot three frames later. The exposure amount of the illumination light therefore repeats every three frames.
When a frame is shot with a global shutter, the light-dark variation caused by flicker appears only between frames. In the example of FIG. 3, the light-dark variation caused by the illumination light can therefore be reduced by superimposing and averaging the frames in groups of three consecutive frames. For example, by generating an output frame in which frame I1, frame I2, and frame I3 are superimposed and averaged, and an output frame in which frame I4, frame I5, and frame I6 are superimposed and averaged, de-flicker processing can be performed that reduces the light-dark variation between output frames caused by the illumination light. When a frame is shot with a rolling shutter, stripe-like brightness variations may occur within an output frame obtained by superimposing three frames. In that case, as described in Patent Document 1, a spectrum of the brightness or color components can be extracted from the three-frame superimposed image, and the amplitude and phase of the flicker component can be determined based on the extracted spectrum to perform de-flicker processing that reduces the flicker component in the frame.
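For reference, the three-frame superposition described above fits in a few lines. The following is a minimal sketch rather than the embodiment's implementation; the synthetic frames and the flicker gains are assumptions for illustration, and linear (pre-gamma) pixel values are assumed.

```python
import numpy as np

def deflicker_by_averaging(frames):
    """Average frames whose exposures together span one full flicker
    cycle (three frames at a 1/60 s frame period under 100 Hz flicker),
    cancelling the frame-to-frame brightness variation."""
    return np.mean(np.asarray(frames, dtype=np.float64), axis=0)

# Three synthetic frames of one scene whose global gain varies with flicker.
rng = np.random.default_rng(0)
scene = rng.uniform(0.2, 0.8, size=(4, 6))
frames = [g * scene for g in (0.9, 1.0, 1.1)]  # flicker gains average to 1
out = deflicker_by_averaging(frames)
assert np.allclose(out, scene)  # the brightness modulation cancels exactly
```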
However, the more pixels the frames subject to de-flicker processing have, the greater the amount of processing required to execute the de-flicker processing, which includes superimposing a plurality of frames. Moreover, because frame superposition reduces high spatial-frequency components, the resolution of the image in the output frame may fall. In the imaging device 100 of the present embodiment, the control unit 110 therefore performs the de-flicker processing on reduced frames obtained by shrinking the plurality of frames generated by the image sensor 120. This suppresses the increase in the amount of processing required to execute the de-flicker processing. In addition, the control unit 110 extracts high spatial-frequency components from the frame and generates an output frame using the reduced frame images and the high spatial-frequency components of the frames generated by the image sensor 120. This suppresses the loss of resolution caused by the de-flicker processing.
FIG. 4 schematically shows the flow of the de-flicker processing performed by the control unit 110. A previous frame 401, a current frame 402, and a next frame 403 are images constituting the moving image. The previous frame 401, current frame 402, and next frame 403 are frames generated consecutively by the image sensor 120 and input to the de-flicker processing section of the control unit 110. The previous frame 401, current frame 402, and next frame 403 are images with N×M pixels. The name "current frame" is used to clearly identify the frame that serves as the reference among three consecutive frames and does not mean "current". Likewise, the names "previous frame" and "next frame" clearly indicate the shooting order relative to the "current frame".
The control unit 110 generates a reduced frame 411 by down-sampling the previous frame 401, a reduced frame 412 by down-sampling the current frame 402, and a reduced frame 413 by down-sampling the next frame 403. The reduced frame 411, reduced frame 412, and reduced frame 413 are images with N/2 × M/2 pixels.
The control unit 110 performs de-flicker processing using the reduced frame 411, reduced frame 412, and reduced frame 413. Specifically, as described in connection with FIG. 3 and elsewhere, a de-flickered image 432 is generated using a frame obtained by adding and averaging the reduced frame 411, reduced frame 412, and reduced frame 413. The control unit 110 then generates an enlarged image 442 by up-sampling the de-flickered image 432. The enlarged image 442 is an image with N×M pixels.
The control unit 110 generates a low-frequency-component image 452 by up-sampling the reduced frame 412. The low-frequency-component image 452 is an image with N×M pixels. The low-frequency-component image 452 is an image obtained by applying down-sampling and then up-sampling to the current frame 402; it can therefore be regarded as an image in which the high spatial-frequency components of the current frame 402 are reduced. The control unit 110 generates a high-frequency-component image 462 from the difference between the current frame 402 and the low-frequency-component image 452. The high-frequency-component image 462 can be regarded as an image with N×M pixels in which the high spatial-frequency components of the current frame 402 are extracted.
The control unit 110 generates an output frame 470 by adding the high-frequency-component image 462 and the enlarged image 442. The output frame 470 is an image with N×M pixels. As described in connection with FIG. 4 and elsewhere, because the control unit 110 performs the de-flicker processing on the reduced frame 411, reduced frame 412, and reduced frame 413, the increase in the amount of processing required for the de-flicker processing can be suppressed. Moreover, because the control unit 110 adds the high-frequency-component image extracted from the current frame 402 to the enlarged image 442, the loss of high-frequency components caused by the frame-superposition de-flicker processing can be suppressed.
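The FIG. 4 flow can be summarized compactly. The sketch below assumes 2×2 box averaging as the down-sampler and nearest-neighbour enlargement as the up-sampler (the description leaves the exact kernels open) and assumes N and M are even; it illustrates the structure rather than reproducing the embodiment's code.

```python
import numpy as np

def downsample2(img):
    """Halve each dimension by 2x2 box averaging (one possible down-sampler)."""
    h, w = img.shape
    return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2(img):
    """Double each dimension by nearest-neighbour repetition (one possible up-sampler)."""
    return np.kron(img, np.ones((2, 2)))

def deflicker_frame(prev_f, cur_f, next_f):
    """FIG. 4 flow: de-flicker on reduced frames, then restore detail."""
    r_prev, r_cur, r_next = map(downsample2, (prev_f, cur_f, next_f))
    deflickered = (r_prev + r_cur + r_next) / 3.0  # de-flickered image 432
    enlarged = upsample2(deflickered)              # enlarged image 442
    low_freq = upsample2(r_cur)                    # low-frequency-component image 452
    high_freq = cur_f - low_freq                   # high-frequency-component image 462
    return enlarged + high_freq                    # output frame 470
```

Because the three-frame superposition runs on N/2 × M/2 frames, it costs roughly a quarter of a full-resolution superposition, while the `high_freq` residual restores the detail that the reduced-resolution path cannot carry.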
FIG. 5 schematically shows the module configuration of the image processing unit 500 that executes the image processing involved in the de-flicker processing in the control unit 110. A down-sampling unit 510 generates the reduced frame 411 by down-sampling the previous frame 401. A down-sampling unit 520 generates the reduced frame 412 by down-sampling the current frame 402. A down-sampling unit 530 generates the reduced frame 413 by down-sampling the next frame 403. Down-sampling is processing that reduces the number of pixels. Examples of down-sampling include computing a weighted average of the pixels surrounding the pixel of interest, pixel-thinning processing, and the like.
A de-flicker unit 540 generates the de-flickered image 432 by performing de-flicker processing on the reduced frame 411, reduced frame 412, and reduced frame 413. An up-sampling unit 550 generates the enlarged image 442 by up-sampling the de-flickered image 432. Up-sampling is processing that increases the number of pixels. Examples of up-sampling include interpolation using the pixel values of adjacent pixels.
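The example realizations just mentioned, a weighted neighbourhood average or pixel thinning for down-sampling and neighbour interpolation for up-sampling, might look as follows. The kernel and the interpolation rule are hypothetical choices; the description does not fix them.

```python
import numpy as np

def downsample_weighted(img):
    """Weighted average around each pixel of interest, then 2x thinning."""
    k = np.array([1.0, 2.0, 1.0]) / 4.0  # assumed smoothing kernel
    sm = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    sm = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, sm)
    return sm[::2, ::2]

def downsample_thin(img):
    """Pixel-thinning down-sampling: keep every second pixel."""
    return img[::2, ::2]

def upsample_linear2(img):
    """2x up-sampling by linear interpolation along each axis."""
    h, w = img.shape
    new_ys = np.linspace(0, h - 1, 2 * h)
    new_xs = np.linspace(0, w - 1, 2 * w)
    tmp = np.array([np.interp(new_ys, np.arange(h), img[:, j]) for j in range(w)]).T
    return np.array([np.interp(new_xs, np.arange(w), tmp[i]) for i in range(2 * h)])
```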
An up-sampling unit 560 generates the low-frequency-component image 452 by up-sampling the reduced frame 412 generated by the down-sampling unit 520. A difference processing unit 570 generates the high-frequency-component image 462 by subtracting, pixel by pixel, the low-frequency-component image 452 generated by the up-sampling unit 560 from the current frame 402. An addition unit 580 generates the output frame 470 by adding, pixel by pixel, the high-frequency-component image 462 generated by the difference processing unit 570 and the enlarged image 442 generated by the up-sampling unit 550. By repeating the above processing for successive sets of three consecutive frames, the image processing unit 500 generates a plurality of output frames in which the flicker component in the moving image is reduced.
FIG. 6 is a flowchart showing the flow of processing executed by the image processing unit 500. In S600, the down-sampling unit 510, down-sampling unit 520, and down-sampling unit 530 generate the reduced frame 411, reduced frame 412, and reduced frame 413 by down-sampling the previous frame 401, current frame 402, and next frame 403, respectively.
In S610, the image processing unit 500 determines whether flicker has occurred. For example, the image processing unit 500 extracts a flicker component from the reduced frame 411, reduced frame 412, and reduced frame 413. Specifically, the image processing unit 500 extracts a spectrum from an image obtained by superimposing the reduced frame 411, reduced frame 412, and reduced frame 413, and extracts the flicker component from that spectrum. When the amplitude of the extracted flicker component exceeds a preset value, the image processing unit 500 determines that flicker has occurred; when the amplitude of the flicker component is less than or equal to the preset value, it determines that no flicker has occurred. The image processing unit 500 may also use the reduced frames generated in S600 to perform the flicker detection processing described in Patent Document 2 above. This reduces the amount of processing required for flicker detection.
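The amplitude test of S610 could be realized, for a rolling-shutter stripe pattern, by measuring the row-wise luminance oscillation of the superimposed reduced frames. The following is only a sketch under that assumption; the expected stripe period and the preset threshold are hypothetical parameters, and the embodiment may instead use the detection method of Patent Document 2.

```python
import numpy as np

def flicker_amplitude(reduced_frames, stripe_period_rows):
    """Estimate the flicker amplitude from superimposed reduced frames.

    reduced_frames: array of shape (3, h, w); stripe_period_rows: expected
    flicker stripe period in rows (from the line rate and 100 Hz flicker)."""
    stacked = np.asarray(reduced_frames, dtype=np.float64).sum(axis=0)
    profile = stacked.mean(axis=1)          # per-row mean luminance
    profile = profile - profile.mean()      # remove the DC component
    spectrum = np.fft.rfft(profile)
    h = profile.size
    k = int(round(h / stripe_period_rows))  # bin of the stripe frequency
    return 2.0 * np.abs(spectrum[k]) / h    # oscillation amplitude

def flicker_detected(reduced_frames, stripe_period_rows, preset=1.0):
    return flicker_amplitude(reduced_frames, stripe_period_rows) > preset
```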
When it is determined that no flicker has occurred, the image processing unit 500 outputs the current frame as the output frame in S680. When it is determined that flicker has occurred, in S620 the image processing unit 500 executes de-flicker processing using the reduced frame 411, reduced frame 412, and reduced frame 413, and generates the de-flickered image 432. In S630, the up-sampling unit 550 increases the number of pixels by up-sampling the de-flickered image 432, thereby generating the enlarged image 442.
In S640, the up-sampling unit 560 increases the number of pixels by up-sampling the reduced frame 412 corresponding to the current frame 402, thereby generating the low-frequency-component image 452. In S650, the difference processing unit 570 generates the high-frequency-component image 462 by subtracting the low-frequency-component image 452 from the current frame 402. In S660, the addition unit 580 generates the output frame 470 by adding the high-frequency-component image 462 generated in S650 and the enlarged image 442 generated in S630. In S670, the output frame 470 generated in S660 is output as a frame constituting the moving image in which the flicker component is reduced.
The detection of the presence or absence of flicker may also be omitted from this flowchart. For example, the image processing unit 500 may execute the processing from S620 to S670 without performing the flicker detection processing of S610. Moreover, when the amount of motion in the moving image is greater than or equal to a preset value, the image processing unit 500 may refrain from performing the above de-flicker processing. The amount of motion in the moving image may be the global motion of the imaging device 100, such as translation, zoom, or rotation of the imaging device 100. The image processing unit 500 may obtain the amount of motion in the moving image based on the detection value of a gyro sensor provided in the imaging device 100. The image processing unit 500 may detect the amount of motion in the moving image from the plurality of frames constituting the moving image, or from the plurality of reduced frames. The image processing unit 500 may perform de-flicker processing on only a partial region. When performing the de-flicker processing, the image processing unit 500 may select partial regions in which spatial-frequency components above a preset frequency are greater than a preset value, perform the de-flicker processing on the selected partial regions, and not perform the de-flicker processing on the remaining regions. The image processing unit 500 may also dispense with deciding whether to perform de-flicker processing on the entire image: when a partial region exists in which spatial-frequency components above a preset frequency are greater than a preset value, the above de-flicker processing is performed on that partial region.
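The gating conditions of this paragraph might be combined as below. The gyro interface, block size, and thresholds are hypothetical; this only illustrates skipping the processing under large motion and restricting it to high-spatial-frequency regions.

```python
import numpy as np

def should_deflicker(global_motion, motion_preset):
    """Skip de-flicker processing when the global motion is too large."""
    return global_motion < motion_preset

def high_frequency_mask(frame, block=16, hf_preset=0.05):
    """Boolean mask of blocks whose high-frequency content exceeds the
    preset value, i.e. the partial regions to de-flicker."""
    h, w = frame.shape
    mask = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            tile = frame[by * block:(by + 1) * block,
                         bx * block:(bx + 1) * block]
            # Crude proxy for components above a preset spatial frequency:
            # mean absolute deviation from the local (low-frequency) mean.
            mask[by, bx] = np.abs(tile - tile.mean()).mean() > hf_preset
    return mask
```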
FIG. 7 schematically shows the configuration of the moving-image data generation module of the control unit 110. RAW data denotes the image data of the frames generated in the image sensor 120 and input to the control unit 110 in sequence. RAW data has, for each pixel, a brightness value of a color component preset per pixel. A storage unit 780 holds the image data needed for the image processing in the image processing unit 500. The image processing unit 500 reduces the sequentially input RAW data and stores it in the storage unit 780. When the reduced frame 413 is generated from the RAW data corresponding to the next frame 403, the image processing unit 500 acquires the reduced frames of the previous and current frames from the storage unit 780 and performs image processing including the above de-flicker processing. In this way, the image processing unit 500 executes the above de-flicker processing in RAW space and generates an output frame in RAW format.
A YUV conversion unit 710 applies YUV conversion processing to the RAW-format output frame to generate YUV data. The YUV conversion unit 710 performs YUV conversion processing that includes gamma correction. In this way, the control unit 110 executes the above de-flicker processing in RAW space before the YUV conversion unit 710 performs the processing that converts pixel values into non-linear pixel values. The pixel signals processed by the image processing unit 500 are therefore pixel signals whose intensity is linear with respect to the pixel signals of the image sensor 120, so the flicker component can be reduced more appropriately.
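The benefit of running the de-flicker processing before gamma correction can be checked numerically: averaging flicker gains in linear RAW space cancels them exactly, while averaging after a non-linear curve leaves a residual (Jensen's inequality). A small sketch, with a gamma of 2.2 assumed purely for illustration:

```python
import numpy as np

def gamma(x, g=2.2):
    return x ** (1.0 / g)

gains = np.array([0.9, 1.0, 1.1])  # flicker modulation over three frames
pixel = 0.5                        # a linear scene value

linear_avg = np.mean(gains * pixel)            # average in RAW (linear) space
print(linear_avg)                              # 0.5 -> the modulation cancels
nonlinear_avg = np.mean(gamma(gains * pixel))  # average after gamma
print(nonlinear_avg - gamma(pixel))            # small residual bias remains
```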
FIG. 8 schematically shows another configuration of the moving-image data generation module of the control unit 110. A YUV conversion unit 810 applies conversion processing including gamma correction to the RAW data. An image processing unit 800 applies image processing including de-flicker processing to the YUV data. The image processing unit 800 generates YUV data by applying the above de-flicker processing to each of the Y signal, the U signal, and the V signal. The image processing unit 800 may also generate YUV data by applying the above de-flicker processing only to the Y signal. When the YUV data is in 4:2:0 format, performing the de-flicker processing based on the configuration shown in FIG. 8 can further reduce the amount of processing required for the de-flicker processing.
In the configuration shown in FIG. 8, the YUV data input to the image processing unit 800 has undergone non-linear processing such as gamma correction. Consequently, there may be cases where simply superimposing a plurality of frames cannot sufficiently reduce the flicker component. The image processing unit 800 therefore performs superposition by applying a weighted addition to the input YUV data corresponding to the plurality of frames. The image processing unit 800 may superimpose the YUV data of the plurality of frames using a weighted addition that depends on the intensity of the pixel signal. The image processing unit 800 may determine the weighting coefficients for superimposing the YUV data of the plurality of frames by taking into account the processing information of the gamma correction performed by the YUV conversion unit 810. This reduces the influence that the non-linear processing performed upstream of the image processing unit 800 has on the de-flicker processing.
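One way such a weighting could be derived from the gamma-correction information is to choose weights so that the addition approximates averaging in linear space, for example by linearizing each sample, averaging, and re-encoding. The sketch below takes this route for the Y plane; the gamma value and the equivalence to an intensity-dependent weighted addition are assumptions of the sketch, not the embodiment's coefficients.

```python
import numpy as np

GAMMA = 2.2  # assumed encoding gamma reported by the YUV conversion unit

def weighted_superpose_y(y_frames, eps=1e-6):
    """Superpose gamma-encoded Y planes of several frames.

    Each sample is linearized, the frames are averaged in approximately
    linear space, and the result is re-encoded; this behaves like a
    pixel-intensity-dependent weighted addition of the encoded data."""
    y = np.asarray(y_frames, dtype=np.float64)
    linear = np.clip(y, eps, None) ** GAMMA  # undo gamma per pixel
    return np.mean(linear, axis=0) ** (1.0 / GAMMA)
```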
FIG. 9 schematically shows the flow of another flicker reduction processing executed by the control unit 110. The processing shown in FIG. 9 differs from that shown in FIG. 4 mainly in that the frames constituting the moving image are down-sampled in two stages; descriptions of processing identical to that described in connection with FIG. 4 and elsewhere are omitted where appropriate.
The down-sampling unit 510 generates a reduced frame 921 by further down-sampling the reduced frame 411; the down-sampling unit 520 generates a reduced frame 922 by further down-sampling the reduced frame 412; and the down-sampling unit 530 generates a reduced frame 923 by further down-sampling the reduced frame 413. The reduced frames 921, 922, and 923 are images having N/4×M/4 pixels.
The control unit 110 performs flicker reduction processing using the reduced frames 921, 922, and 923. Specifically, as described in connection with FIG. 3 and elsewhere, a flicker-reduced image 932 is generated using a frame obtained by averaging the reduced frames 921, 922, and 923. The up-sampling unit 550 then generates an enlarged image 942, having N×M pixels, by up-sampling the flicker-reduced image 932.
The up-sampling unit 560 generates a low-frequency component image 952, having N×M pixels, by up-sampling the reduced frame 922. The low-frequency component image 952 can be regarded as an image in which the high spatial frequency components of the current frame 402 have been reduced. The difference processing unit 570 generates a high-frequency component image 962 from the difference between the current frame 402 and the low-frequency component image 952; it can be regarded as an N×M-pixel image in which the high spatial frequency components have been extracted from the current frame 402.
The addition unit 580 generates an output frame 970, having N×M pixels, by adding the high-frequency component image 962 to the enlarged image 942. With the configuration shown in FIG. 9, because the flicker reduction processing is performed on the reduced frames 921, 922, and 923, the increase in the amount of processing required can be suppressed even further. Although FIG. 9 shows a configuration with two stages of down-sampling, a configuration with three or more stages of down-sampling may also be adopted.
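In the sketch from FIG. 4, the generalization to k stages only repeats the reduction step; the helpers below reuse the illustrative downsample() and upsample() defined earlier, with the matching number of up-sampling stages assumed on the output path.

```python
def downsample_stages(frame, stages=2):
    # FIG. 9 uses two stages (N/4 x M/4); three or more work the same way.
    for _ in range(stages):
        frame = downsample(frame)
    return frame

def upsample_stages(frame, stages=2):
    # The enlargement back to N x M must invert the same number of stages.
    for _ in range(stages):
        frame = upsample(frame)
    return frame
```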
The above description covers the case in which the luminance variation period of the illumination is 1/100 second and the frame period is 1/60 second, so that the flicker reduction processing is performed using an image in which three frames are superimposed. For other combinations of luminance variation period and frame period, the same flicker reduction processing can be performed by appropriately adjusting the number of superimposed frames.
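As a worked check of this frame count (assuming the flicker component is approximately sinusoidal, which the disclosure does not state explicitly): with flicker period $T_f = 1/100$ s and frame period $T_v = 1/60$ s,

$$\frac{T_v}{T_f} = \frac{1/60}{1/100} = \frac{5}{3}, \qquad \phi_k = \left(\frac{5k}{3} \bmod 1\right) \in \left\{0, \tfrac{2}{3}, \tfrac{1}{3}\right\}, \quad k = 0, 1, 2,$$

so three consecutive frames sample the flicker cycle at three uniformly spaced phases, and for any initial phase $\phi$

$$\frac{1}{3}\sum_{k=0}^{2} \sin\!\bigl(2\pi(\phi + \phi_k)\bigr) = 0.$$

Averaging the three frames therefore cancels the flicker component. More generally, the smallest $n$ for which $n T_v$ is an integer multiple of $T_f$ (here $3 \cdot \tfrac{1}{60} = 5 \cdot \tfrac{1}{100}$) yields uniformly spaced phases, which is one way to choose the number of superimposed frames for other combinations.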
As described above, the image processing performed by the control unit 110 can reduce the amount of processing required for the flicker reduction processing while suppressing the loss of high spatial frequency components caused by that processing.
Some or all of the functions of the imaging device 100 may be integrated into a mobile terminal such as a mobile phone. The imaging device 100 may be a surveillance camera, a video camera, or the like, and some or all of its functions may be integrated into any device capable of capturing moving images.
The imaging device 100 described above may be mounted on a mobile body, for example on an unmanned aerial vehicle (UAV) as shown in FIG. 10. The UAV 10 may include a UAV body 20, a gimbal 50, a plurality of imaging devices 60, and the imaging device 100. The gimbal 50 and the imaging device 100 are an example of an imaging system, and the UAV 10 is an example of a mobile body propelled by a propulsion unit. In addition to UAVs, the concept of a mobile body includes flying bodies such as aircraft moving through the air, vehicles moving on the ground, ships moving on the water, and the like.
The UAV body 20 includes a plurality of rotors, which are an example of a propulsion unit. The UAV body 20 causes the UAV 10 to fly by controlling the rotation of the rotors, for example using four rotors; the number of rotors is not limited to four, and the UAV 10 may also be a fixed-wing aircraft without rotors.
The imaging device 100 is an imaging camera that captures a subject included in a desired imaging range. The gimbal 50 rotatably supports the imaging device 100 and is an example of a support mechanism. For example, the gimbal 50 uses an actuator to support the imaging device 100 rotatably about the pitch axis, and further uses actuators to support it rotatably about the roll axis and the yaw axis. The gimbal 50 can change the attitude of the imaging device 100 by rotating it about at least one of the yaw, pitch, and roll axes.
The plurality of imaging devices 60 are sensing cameras that capture the surroundings of the UAV 10 in order to control its flight. Two imaging devices 60 may be provided at the nose, i.e., the front, of the UAV 10, and another two at its bottom. The two imaging devices on the front side may be paired to function as a so-called stereo camera, as may the two on the bottom side. Three-dimensional spatial data of the surroundings of the UAV 10 may be generated from the images captured by the plurality of imaging devices 60. The number of imaging devices 60 included in the UAV 10 is not limited to four; it suffices for the UAV 10 to include at least one, and it may include at least one imaging device 60 on each of its nose, tail, sides, bottom, and top. The angle of view settable in the imaging devices 60 may be wider than that settable in the imaging device 100, and the imaging devices 60 may have a single-focus lens or a fisheye lens.
The remote operation device 300 communicates with the UAV 10 to operate it remotely, which may be done wirelessly. The remote operation device 300 transmits to the UAV 10 instruction information indicating various commands related to the movement of the UAV 10, such as ascending, descending, accelerating, decelerating, moving forward, moving backward, and rotating. The instruction information includes, for example, instruction information for raising the altitude of the UAV 10, and may indicate the altitude at which the UAV 10 should be located; the UAV 10 then moves so as to be located at the indicated altitude. The instruction information may include an ascent command, in which case the UAV 10 ascends while the command is being received; when its altitude has reached an upper limit, however, the UAV 10 may be restricted from ascending even while receiving the ascent command.
FIG. 11 shows an example of a computer 1200 in which a plurality of aspects of the present invention may be embodied in whole or in part. A program installed on the computer 1200 can cause the computer 1200 to function as one or more "units" of, or operations associated with, the device according to the embodiments of the present invention; for example, the program can cause the computer 1200 to function as the control unit 110. The program can also cause the computer 1200 to execute the process according to the embodiments of the present invention or the stages of that process. Such a program may be executed by the CPU 1212 to cause the computer 1200 to execute the specified operations associated with some or all of the blocks in the flowcharts and block diagrams described herein.
The computer 1200 of the present embodiment includes a CPU 1212 and a RAM 1214, which are connected to each other via a host controller 1210. The computer 1200 further includes a communication interface 1222 and an input/output unit, which are connected to the host controller 1210 via an input/output controller 1220, and also includes a ROM 1230. The CPU 1212 operates in accordance with the programs stored in the ROM 1230 and the RAM 1214, thereby controlling each unit.
The communication interface 1222 communicates with other electronic devices via a network. A hard disk drive may store the programs and data used by the CPU 1212 in the computer 1200. The ROM 1230 stores a boot program and the like executed by the computer 1200 at startup, and/or programs that depend on the hardware of the computer 1200. Programs are provided via a computer-readable recording medium such as a CD-ROM, a USB memory, or an IC card, or via a network; they are installed in the RAM 1214 or the ROM 1230, which are also examples of computer-readable recording media, and executed by the CPU 1212. The information processing described in these programs is read by the computer 1200 and brings about cooperation between the programs and the various types of hardware resources described above. A device or method may be constituted by realizing the operation or processing of information in accordance with the use of the computer 1200.
For example, when communication is performed between the computer 1200 and an external device, the CPU 1212 may execute a communication program loaded in the RAM 1214 and, based on the processing described in the communication program, instruct the communication interface 1222 to perform communication processing. Under the control of the CPU 1212, the communication interface 1222 reads transmission data stored in a transmission buffer provided in a recording medium such as the RAM 1214 or a USB memory and transmits the read data to the network, or writes reception data received from the network into a reception buffer provided in the recording medium.
In addition, the CPU 1212 may cause the RAM 1214 to read all or a necessary portion of a file or database stored in an external recording medium such as a USB memory, and perform various types of processing on the data in the RAM 1214. The CPU 1212 may then write the processed data back to the external recording medium.
Various types of information, such as various types of programs, data, tables, and databases, may be stored in a recording medium and subjected to information processing. On the data read from the RAM 1214, the CPU 1212 may execute the various types of processing described throughout this disclosure and specified by the instruction sequences of the programs, including various types of operations, information processing, condition determination, conditional branching, unconditional branching, and information retrieval/replacement, and write the results back to the RAM 1214. The CPU 1212 may also retrieve information in files, databases, and the like in the recording medium. For example, when a plurality of entries each having an attribute value of a first attribute associated with an attribute value of a second attribute are stored in the recording medium, the CPU 1212 may retrieve, from among the plurality of entries, an entry matching a condition that specifies the attribute value of the first attribute, read the attribute value of the second attribute stored in that entry, and thereby acquire the attribute value of the second attribute associated with the first attribute satisfying the preset condition.
The programs or software modules described above may be stored on the computer 1200 or on a computer-readable storage medium near the computer 1200. A recording medium such as a hard disk or a RAM provided in a server system connected to a dedicated communication network or the Internet can also be used as a computer-readable storage medium, whereby the programs can be provided to the computer 1200 via the network.
The present invention has been described above using embodiments, but the technical scope of the present invention is not limited to the scope described in the above embodiments. It is apparent to those skilled in the art that various modifications or improvements can be made to the above embodiments, and it is apparent from the description of the claims that embodiments incorporating such modifications or improvements are also included within the technical scope of the present invention.
It should be noted that the order of execution of the operations, procedures, steps, stages, and the like in the devices, systems, programs, and methods shown in the claims, the specification, and the drawings may be realized in any order, unless explicitly indicated by expressions such as "before" or "prior to", and as long as the output of a preceding process is not used in a subsequent process. Even where the operation flows in the claims, specification, and drawings are described using terms such as "first" and "next" for convenience, this does not mean that they must be performed in that order.
[Description of Reference Numerals]
10 UAV
20 UAV body
50 gimbal
60 imaging device
100 imaging device
102 imaging unit
110 control unit
120 image sensor
130 memory
160 display unit
162 instruction unit
170 communication unit
401 previous frame
402 current frame
403 next frame
411 reduced frame
412 reduced frame
413 reduced frame
432 flicker-reduced image
442 enlarged image
452 low-frequency component image
462 high-frequency component image
470 output frame
500 image processing unit
510, 520, 530 down-sampling unit
540 flicker reduction unit
550, 560 up-sampling unit
570 difference processing unit
580 addition unit
710 YUV conversion unit
780 storage unit
800 image processing unit
810 YUV conversion unit
921, 922, 923 reduced frame
932 flicker-reduced image
942 enlarged image
952 low-frequency component image
962 high-frequency component image
970 output frame
1200 computer
1210 host controller
1212 CPU
1214 RAM
1220 input/output controller
1222 communication interface
1230 ROM

Claims (13)

  1. An image processing device, comprising a circuit configured to perform image processing for reducing flicker on a moving image,
    wherein the circuit is configured to:
    generate a plurality of reduced images by reducing the number of pixels of each of a plurality of images constituting the moving image;
    generate, using an image obtained by adding the plurality of reduced images, a second image in which a flicker component in a first image included in the plurality of images is reduced;
    generate a third image by increasing the number of pixels of the second image;
    generate a difference image between the first image and a low spatial frequency component image of the first image; and
    generate an output image corresponding to the first image by adding the third image and the difference image.
  2. The image processing device according to claim 1, wherein the circuit is configured to generate the third image by increasing the number of pixels of a first reduced image generated by reducing the number of pixels of the first image.
  3. The image processing device according to claim 1 or 2, wherein the circuit is configured to perform the image processing for reducing flicker when the magnitude of a flicker component detected from the moving image exceeds a preset value.
  4. The image processing device according to claim 3, wherein the circuit is configured to:
    detect a flicker component from the plurality of reduced images; and
    perform the image processing for reducing flicker on each of the plurality of images when the magnitude of the flicker component detected from the plurality of reduced images exceeds a preset value.
  5. The image processing device according to claim 1 or 2, wherein the circuit is configured to perform the image processing for reducing the flicker on a region in which a high-frequency component larger than a preset value is detected.
  6. The image processing device according to claim 1 or 2, wherein the circuit is configured to perform the image processing for reducing the flicker when the amount of motion in the moving image is below a preset value.
  7. The image processing device according to claim 1 or 2, wherein the circuit is configured to generate the second image using an image obtained by adding the plurality of reduced images with mutually different weights when gamma correction is applied to the moving image.
  8. The image processing device according to claim 1 or 2, wherein the circuit is configured to perform the image processing for reducing the flicker only on the Y signal of the moving image when the color space format of the moving image is the YUV format.
  9. An imaging device, comprising: the image processing device according to claim 1 or 2; and
    an image sensor that generates the moving image.
  10. A mobile body that moves with the imaging device according to claim 9 mounted thereon.
  11. The mobile body according to claim 10, wherein the mobile body is an unmanned aerial vehicle.
  12. A program for causing a computer to perform image processing for reducing flicker on a moving image, wherein the program causes the computer to:
    generate a plurality of reduced images by reducing the number of pixels of each of a plurality of images constituting the moving image;
    generate, using an image obtained by adding the plurality of reduced images, a second image in which a flicker component in a first image included in the plurality of images is reduced;
    generate a third image by increasing the number of pixels of the second image;
    generate a difference image between the first image and a low spatial frequency component image of the first image; and
    generate an output image corresponding to the first image by adding the third image and the difference image.
  13. A method of performing image processing for reducing flicker on a moving image, comprising the following stages:
    generating a plurality of reduced images by reducing the number of pixels of each of a plurality of images constituting the moving image;
    generating, using an image obtained by adding the plurality of reduced images, a second image in which a flicker component in a first image included in the plurality of images is reduced;
    generating a third image by increasing the number of pixels of the second image;
    generating a difference image between the first image and a low spatial frequency component image of the first image; and
    generating an output image corresponding to the first image by adding the third image and the difference image.
PCT/CN2021/093317 2020-05-20 2021-05-12 Image processing device, imaging device, mobile body, program, and method WO2021233177A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020088435A JP6932895B1 (ja) 2020-05-20 2020-05-20 Image processing device, imaging device, mobile body, program, and method
JP2020-088435 2020-05-20

Publications (1)

Publication Number Publication Date
WO2021233177A1 true WO2021233177A1 (zh) 2021-11-25

Family

ID=77549962

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/093317 WO2021233177A1 (zh) 2020-05-20 2021-05-12 Image processing device, imaging device, mobile body, program, and method

Country Status (2)

Country Link
JP (1) JP6932895B1 (zh)
WO (1) WO2021233177A1 (zh)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060232686A1 (en) * 2005-04-19 2006-10-19 Sony Corporation Flicker correction method and device, and imaging device
CN1874420A (zh) * 2006-06-13 2006-12-06 北京中星微电子有限公司 Method for eliminating inter-frame flicker noise in an image sequence
CN1874421A (zh) * 2006-06-13 2006-12-06 北京中星微电子有限公司 Device for eliminating inter-frame flicker noise in an image sequence
US20110102479A1 (en) * 2009-10-29 2011-05-05 Canon Kabushiki Kaisha Image processing apparatus, image processing method and storage medium
CN102113308A (zh) * 2009-06-04 2011-06-29 松下电器产业株式会社 Image processing device, image processing method, program, recording medium, and integrated circuit

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4586052B2 (ja) * 2007-08-08 2010-11-24 キヤノン株式会社 Image processing apparatus and control method therefor
JP2011254404A (ja) * 2010-06-03 2011-12-15 Canon Inc Image processing apparatus and control method therefor
JP6276639B2 (ja) * 2014-04-22 2018-02-07 日本放送協会 Video camera apparatus, video signal processing method, and video signal processing apparatus
JP7047766B2 (ja) * 2016-10-27 2022-04-05 ソニーグループ株式会社 Video signal processing apparatus, imaging apparatus, and flicker confirmation method in imaging apparatus

Also Published As

Publication number Publication date
JP2021182727A (ja) 2021-11-25
JP6932895B1 (ja) 2021-09-08


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21808102

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21808102

Country of ref document: EP

Kind code of ref document: A1