WO2020026595A1 - Signal processing device, imaging device, and signal processing method - Google Patents

Signal processing device, imaging device, and signal processing method

Info

Publication number
WO2020026595A1
WO2020026595A1 (PCT/JP2019/023003)
Authority
WO
WIPO (PCT)
Prior art keywords
reference value
motion
frame
correction gain
determination unit
Prior art date
Application number
PCT/JP2019/023003
Other languages
English (en)
Japanese (ja)
Inventor
剛 渡辺
浩人 數井
Original Assignee
Sony Corporation (ソニー株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sony Corporation (ソニー株式会社)
Publication of WO2020026595A1

Links

Images

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B7/00Control of exposure by setting shutters, diaphragms or filters, separately or conjointly
    • G03B7/08Control effected solely on the basis of the response, to the intensity of the light received by the camera, of a built-in light-sensitive device
    • G03B7/091Digital circuits
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • The present disclosure relates to a signal processing device, an imaging device, and a signal processing method.
  • An object of the present disclosure is to provide a signal processing device, an imaging device, and a signal processing method that can perform appropriate flicker correction even when, for example, a sharp change in luminance due to movement of an object occurs in a screen.
  • A signal processing device comprising: a motion determination unit that determines the presence or absence of motion; and a reference value determination unit that determines a reference value for determining a correction gain for correcting flicker, wherein the reference value determination unit determines, according to the determination result of the motion determination unit, either a first reference value or a second reference value closer to the luminance of a predetermined frame than the first reference value, as the reference value.
  • An imaging device comprising: an imaging unit; a motion determination unit that determines the presence or absence of motion; and a reference value determination unit that determines a reference value for determining a correction gain for correcting flicker, wherein the reference value determination unit determines, according to the determination result of the motion determination unit, either a first reference value or a second reference value closer to the luminance of a predetermined frame than the first reference value, as the reference value.
  • A signal processing method in which a motion determination unit determines the presence or absence of motion; a reference value determination unit determines a reference value for determining a correction gain for correcting flicker; and the reference value determination unit determines, according to the determination result of the motion determination unit, either a first reference value or a second reference value closer to the luminance of a predetermined frame than the first reference value, as the reference value.
  • According to the present disclosure, it is possible to perform appropriate flicker correction even when a sharp change in luminance due to the movement of an object occurs in a screen.
  • The effects described here are not necessarily limiting, and may be any of the effects described in the present disclosure; the contents of the present disclosure are not to be construed as being limited by the illustrated effects.
  • FIG. 1 is a block diagram illustrating an example of an internal configuration of the imaging device according to the embodiment.
  • FIG. 2 is a block diagram illustrating a configuration example of the flicker correction unit according to the embodiment.
  • FIG. 3A shows an example of frames input in chronological order.
  • FIG. 3B is a graph in which the in-screen luminance average value calculated in one screen is plotted on a time axis.
  • FIG. 4 is a graph showing a temporal change in the reference value and the like determined by the reference value determination unit.
  • FIG. 5A is a graph showing a temporal change in the reference value and the like determined by the reference value determination unit.
  • FIG. 5B is a graph showing a temporal change in the correction gain calculated based on the in-screen average luminance value and the reference value.
  • FIG. 6 is a diagram referred to when describing a problem to be considered in the embodiment.
  • FIG. 7 is a diagram for explaining an example in which one screen is divided into a plurality of divided areas.
  • FIG. 8 is a diagram referred to when explaining another problem to be considered in the embodiment.
  • FIG. 9 is a diagram referred to when describing another problem to be considered in the embodiment.
  • FIG. 10 is a diagram referred to when explaining another problem to be considered in the embodiment.
  • FIG. 11 is a diagram for explaining an example of an effect obtained by the processing according to the embodiment.
  • FIGS. 12A and 12B are diagrams for explaining an operation example of the correction gain adjustment unit according to the embodiment.
  • FIG. 13 is a flowchart illustrating a flow of processing performed in the imaging device according to the embodiment.
  • Before describing the embodiments, flicker will be explained. Since flicker itself is a known phenomenon, only a brief description is given. When a moving image is captured under illumination light, such as a fluorescent lamp or an LED, whose brightness varies with the power supply frequency, flickering of the captured image may occur. Flicker occurs because of the difference between the fluctuation cycle of the brightness of the illumination light and the cycle at which frames are captured (hereinafter, the "imaging cycle"). When a rolling-shutter image sensor such as a CMOS (Complementary Metal Oxide Semiconductor) image sensor is used, "in-plane flicker" (also referred to as "line flicker"), in which lines appear to flicker, is generated; with a global-shutter sensor, by contrast, the brightness of the entire frame fluctuates from frame to frame ("surface flicker").
  • The blinking cycle of the illumination light is determined by the power supply frequency, which is either 50 Hz or 60 Hz worldwide. Where the power supply frequency is 60 Hz, the blinking cycle of the illumination light is 1/120 second; where it is 50 Hz, the blinking cycle is 1/100 second (the light blinks at twice the power supply frequency).
  • The occurrence cycle of surface flicker (hereinafter, the "flicker frequency") is determined by the difference between the imaging frequency, i.e. the frequency of the imaging cycle of the imaging device, and the blinking frequency of the illumination light.
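The patent contains no code, but the relationship above can be sketched numerically. The helper below is an illustrative assumption (its name and the aliasing treatment are not from the patent): illumination blinks at twice the mains frequency, and the residual left after dividing the blinking frequency by the imaging frequency shows up as the surface-flicker beat.

```python
def surface_flicker_frequency(frame_rate_hz, power_hz):
    """Illustrative beat-frequency estimate (an assumption, not from the patent).

    Illumination driven at power frequency f blinks at 2*f.  When the
    imaging frequency does not divide the blinking frequency evenly, the
    residual ("beat") appears as surface flicker.
    """
    blink_hz = 2.0 * power_hz          # e.g. 50 Hz mains -> 100 Hz blinking
    beat = blink_hz % frame_rate_hz    # per-frame residual, in Hz
    # the visible beat is at most half the frame rate (aliasing)
    return min(beat, frame_rate_hz - beat)

print(surface_flicker_frequency(30.0, 50.0))  # 100 % 30 = 10 -> 10 Hz beat
print(surface_flicker_frequency(30.0, 60.0))  # 120 % 30 = 0  -> no flicker
```

This matches the common observation that 30 fps capture under 60 Hz mains shows no surface flicker, while 50 Hz mains produces a visible beat.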
  • FIG. 1 is a block diagram illustrating a configuration example of an imaging device (imaging device 100) according to the present embodiment.
  • the imaging apparatus 100 includes, for example, a control unit 1, an imaging device 2, which is an example of an imaging unit, a flicker correction unit 3, an output video signal generation unit 4, a video signal output unit 5, and an operation unit 6.
  • The control unit 1, the imaging device 2, the flicker correction unit 3, the output video signal generation unit 4, and the video signal output unit 5 are connected to one another via a bus B, and can transmit and receive various data and commands to and from one another.
  • the control unit 1 is configured by a microcomputer or the like, and controls each block configuring the imaging device 100.
  • the control unit 1 has a ROM (Read Only Memory) and a RAM (Random Access Memory) (not shown).
  • The ROM stores a program executed by the control unit 1.
  • the RAM is used as a work memory when the control unit 1 executes a program.
  • The program stored in the ROM may instead be executed by another component (for example, the flicker correction unit 3), or that component may have its own memory storing the program it executes.
  • the control unit 1 is connected to the operation unit 6, and an operation signal corresponding to an operation on the operation unit 6 is supplied to the control unit 1.
  • the control unit 1 executes control according to the operation signal.
  • The imaging device 2 is, for example, a global-shutter CMOS image sensor; a global-shutter CCD element may also be used. The imaging device 2 generates an analog signal by photoelectrically converting, over its imaging surface, the imaging light focused via an optical system (not shown) such as a lens, and generates image data by converting the analog signal into a digital signal. The image data consists of color signals such as R (red), G (green), and B (blue). The image data generated by the imaging device 2 is supplied to the flicker correction unit 3.
  • The flicker correction unit 3 adjusts the gain applied to the image data supplied from the imaging device 2 (hereinafter, the correction gain), and removes the surface-flicker component contained in the image data by performing flicker correction processing on the image data using the correction gain.
  • the image data that has been subjected to flicker correction processing by the flicker correction unit 3 is supplied to the output video signal generation unit 4.
  • The correction gain is adjusted independently for each of the R, G, and B color signals. Since the processing of the flicker correction unit 3 is basically the same for each of these three signals, the description below does not distinguish among them. Details of the flicker correction unit 3 are given later.
  • The output video signal generation unit 4 performs signal processing for improving image quality on each frame of the image data supplied from the flicker correction unit 3, such as correction of peripheral light falloff, predetermined interpolation processing, filtering, shading correction, and white balance processing.
  • The output video signal generation unit 4 also performs well-known image processing such as color tone adjustment, luminance compression, and gamma correction on the image data supplied from the flicker correction unit 3, and generates, as appropriate, a video signal for input to a predetermined display device.
  • the video signal generated by the output video signal generation unit 4 is supplied to the video signal output unit 5.
  • the video signal output unit 5 is an interface that outputs the video signal generated by the output video signal generation unit 4. For example, a video signal is output to the recording device via the video signal output unit 5.
  • the recording device includes a recording unit and a driver that records a video signal on the recording unit and reproduces the video signal from the recording unit.
  • the recording unit includes a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, a magneto-optical storage device, and the like.
  • the driver has a configuration corresponding to the configuration of the recording unit.
  • the recording device records the video signal supplied from the video signal output unit 5 in the recording unit.
  • a video signal is supplied to the display device via the video signal output unit 5.
  • the display device includes a display unit and a driver that drives the display unit and displays a video signal.
  • the display device displays the video signal supplied from the video signal output unit 5 on the display unit.
  • the video signal output via the video signal output unit 5 may be recorded, reproduced, or recorded and reproduced.
  • the recording device and the display device may be devices included in the imaging device 100, or may be external devices connected to the imaging device 100 via a wired or wireless connection.
  • The video signal output via the video signal output unit 5 may be supplied to a device (a server device or a personal computer) connected via a network such as the Internet, and that device may then perform reproduction and recording of the video signal.
  • The operation unit 6 is a general term for the components through which operation inputs are made, such as the buttons, keys, or touch screen provided on the imaging device 100.
  • the operation unit 6 may be a remote controller that remotely operates the imaging device 100. Various settings for the imaging device 100 are made using the operation unit 6.
  • FIG. 2 is a block diagram illustrating a configuration example of the flicker correction unit 3.
  • The flicker correction unit 3 according to the embodiment has, for example, a luminance calculation unit 31, a motion determination unit 32, a reference value determination unit 33, a correction gain calculation unit 34, a correction gain adjustment unit 35, and a correction processing unit 36.
  • The luminance calculation unit 31 calculates the luminance of a predetermined frame input via the imaging device 2. Note that the luminance calculation unit 31 according to the embodiment calculates the luminance of each divided area obtained by dividing one screen.
  • the division area is formed, for example, by dividing one screen into 8160 areas.
  • the number of pixels included in each divided area has a different value according to the number of pixels in one screen.
  • The motion determination unit 32 determines whether or not there is motion for each divided area, and outputs the determination result to the reference value determination unit 33.
  • the reference value determination unit 33 determines a reference value for determining a correction gain for correcting flicker.
  • The reference value determination unit 33 determines, according to the determination result of the motion determination unit 32, either the first reference value or the second reference value closer to the luminance of the predetermined frame than the first reference value, as the reference value.
  • the reference value determination unit 33 determines, for example, a reference value for each divided area AR described later.
  • the correction gain calculation unit 34 calculates a correction gain for each divided area using the reference value determined by the reference value determination unit 33.
  • the correction gain adjustment unit 35 adjusts the correction gain calculated by the correction gain calculation unit 34 using the correction gain in the surrounding divided area.
  • the correction processing unit 36 performs flicker correction by applying the adjusted correction gain adjusted by the correction gain adjustment unit 35 to the corresponding divided area.
  • (Overall operation example) Next, an operation example of the imaging device 100 will be described, beginning with an overall outline. For example, an operation input instructing the start of moving-image capture is performed on the operation unit 6. In response, the control unit 1 controls each unit of the imaging device 100 so that a moving image is captured.
  • a predetermined frame is output from the imaging device 2.
  • the flicker correction unit 3 performs flicker correction on a predetermined frame
  • the output video signal generation unit 4 performs signal processing on the flicker-corrected frame.
  • the frame subjected to the signal processing by the output video signal generation unit 4 is output from the video signal output unit 5, and the output frame is recorded and displayed.
  • The video signal output unit 5 may output frames one at a time as they are processed by the flicker correction unit 3 and the output video signal generation unit 4, or may output the frames constituting one moving image together.
  • the flicker correction unit 3 includes, for example, a luminance calculation unit 31, a reference value determination unit 33, a correction gain calculation unit 34, and a correction processing unit 36.
  • the luminance calculator 31 calculates the luminance of a predetermined frame.
  • FIGS. 3A and 3B are diagrams for explaining an operation example of the luminance calculation unit 31.
  • FIG. 3A shows frames FR1, FR2, and FR3 as frames input in time series.
  • The luminance calculation unit 31 uses the image data obtained by the imaging device 2 to calculate an integrated luminance value over one screen area, and then calculates the average luminance in that area (hereinafter, the "in-screen average luminance value A") by dividing the integrated luminance value by the number of pixels in one screen area.
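As an illustrative sketch (not code from the patent), the in-screen average luminance value A is simply the integrated luminance divided by the pixel count; the function name and the 2-D luminance-array input are assumptions for illustration.

```python
import numpy as np

def in_screen_average_luminance(frame):
    """In-screen average luminance value A: integrated luminance / pixel count.

    `frame` is assumed to be a 2-D array of per-pixel luminance values.
    """
    integrated = float(frame.sum())   # integrated luminance over one screen area
    return integrated / frame.size    # divide by the number of pixels

frame = np.array([[10.0, 20.0],
                  [30.0, 40.0]])
print(in_screen_average_luminance(frame))  # (10+20+30+40)/4 = 25.0
```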
  • FIG. 3B is a graph in which the in-screen average brightness value A calculated in one screen is plotted on the time axis.
  • the vertical axis in FIG. 3B indicates the average luminance value A within the screen, and the horizontal axis indicates the number of frames (time axis).
  • The value of each axis may be indicated as an arbitrary unit (a.u.) normalized using a predetermined reference value.
  • the average luminance value A within the screen of the frame FR1 is the largest
  • the average luminance value A within the screen of the frame FR2 is intermediate
  • the average luminance value A of the frame FR3 is the smallest.
  • the change pattern of the average luminance value A in the screen in the time direction is substantially synchronized with the flicker frequency.
  • the reference value determination unit 33 determines the reference value R.
  • the reference value determination unit 33 determines the reference value R by, for example, taking the average value of the luminance of a plurality of frames (preferably, frames within one or more periods of flicker). More specifically, the reference value determination unit 33 determines (calculates) the reference value R according to the following Equation 1.
  • (Equation 1)  R = C × (A_1 + A_2 + … + A_N) / N
    where R: reference value, A_n: in-screen average luminance value of frame n (the larger n is, the further in the past), C: coefficient, and N: number of added (averaged) frames.
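The averaging described for Equation 1 can be written as a short sketch (illustrative, not code from the patent); the default coefficient value here is an assumption.

```python
def reference_value(averages, coefficient=1.0):
    """Reference value R per Equation 1: R = C * (sum of A_n) / N.

    `averages` holds the in-screen average luminance values A_n of the
    last N frames, ideally spanning one or more flicker periods so that
    the flicker component averages out.  `coefficient` corresponds to C.
    """
    n = len(averages)
    return coefficient * sum(averages) / n

# four frames spanning one flicker period (values are made up)
print(reference_value([120.0, 100.0, 80.0, 100.0]))  # (120+100+80+100)/4 = 100.0
```

Averaging over a whole number of flicker periods is what lets R sit at the flicker-free luminance level, which is why the text recommends frames within one or more periods of flicker.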
  • FIG. 4 is a graph showing an example of a temporal change of the reference value R determined by the reference value determination unit 33.
  • the vertical axis in the graph shown in FIG. 4 indicates the average luminance value A or the reference value R in the screen, and the horizontal axis indicates the number of frames (time axis).
  • A line plotted with rectangular marks indicates the temporal change of the in-screen average luminance value A, and a line plotted with circle marks indicates the temporal change of the reference value R.
  • the correction gain calculator 34 calculates the correction gain G using the reference value R determined by the reference value determiner 33.
  • the correction gain calculation unit 34 calculates the correction gain G using, for example, the following Expression 2.
  • (Equation 2)  G = R / A
    where G: correction gain, R: reference value, and A: in-screen average luminance value.
  • FIG. 5A is a graph similar to FIG. 4. FIG. 5B is a graph showing the temporal change of the correction gain G calculated based on the in-screen average luminance value A and the reference value R.
  • the vertical axis in FIG. 5B indicates the correction gain G
  • the horizontal axis indicates the number of frames (time axis).
  • points plotted by rectangular marks on the dotted line indicate the correction gain G.
  • the correction gain G calculated by the correction gain calculator 34 is supplied to the correction processor 36.
  • The correction processing unit 36 applies (for example, multiplies by) the correction gain G to a predetermined frame. As a result, flicker correction is performed, and degradation of image quality due to flicker is suppressed.
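Putting Equation 2 and the multiplication step together, the basic per-frame correction can be sketched as follows (illustrative only; function names are assumptions):

```python
def correction_gain(reference, average):
    """Correction gain G per Equation 2: G = R / A."""
    return reference / average

def apply_flicker_correction(frame_luma, gain):
    """The correction processing unit multiplies the frame by the gain."""
    return [v * gain for v in frame_luma]

# a frame whose in-screen average (80) sits below the reference (100)
# is brightened back toward the flicker-free level
g = correction_gain(100.0, 80.0)                  # G = 100/80 = 1.25
print(apply_flicker_correction([80.0, 80.0], g))  # [100.0, 100.0]
```

A frame darkened by the flicker trough gets a gain above 1, a brightened frame a gain below 1, so successive frames converge on the reference luminance.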
  • FIG. 6 shows a predetermined indoor space.
  • a lighting device 42 which is a flicker generation source, is attached to a ceiling 41 in the indoor space.
  • a window 43 is provided in the indoor space.
  • the window 43 is irradiated with sunlight SL.
  • the sunlight SL is schematically shown by oblique lines.
  • The sunlight entering through the window 43 does not flicker, so it raises the in-screen average luminance without contributing a flicker component. As a result, the correction gain G may become smaller than the actual flicker amplitude; appropriate flicker correction is then not performed in the portions affected by flicker (for example, the ceiling 41 or the wall portion 44), and residual flicker may remain.
  • one screen is divided into a plurality of areas (blocks).
  • 8160 divided areas AR are formed by dividing one screen into 8160 areas.
  • the divided area AR shown in FIG. 7 is not generally displayed and is not presented to the user, but may be displayed.
  • the luminance calculation unit 31, the reference value determination unit 33, the correction gain calculation unit 34, and the correction processing unit 36 perform the above-described processing for each divided area AR.
  • The luminance calculation unit 31 obtains an integrated luminance value within a predetermined divided area AR using the image data obtained by the imaging device 2, and calculates the in-divided-area average luminance value A′ for that divided area by dividing the integrated luminance value by the number of pixels in the divided area.
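The per-divided-area averaging can be sketched with a block reshape (illustrative; the patent uses 8160 areas, while the 2×2 grid here is only for demonstration, and divisible frame dimensions are an assumption):

```python
import numpy as np

def divided_area_averages(frame, rows, cols):
    """In-divided-area average luminance A' for every divided area AR.

    Splits one screen into rows*cols divided areas and averages each
    block.  Assumes the frame dimensions divide evenly by rows and cols.
    """
    h, w = frame.shape
    bh, bw = h // rows, w // cols
    # reshape into (rows, cols, bh, bw) blocks, then average each block
    blocks = frame.reshape(rows, bh, cols, bw).swapaxes(1, 2)
    return blocks.mean(axis=(2, 3))

frame = np.arange(16, dtype=float).reshape(4, 4)
print(divided_area_averages(frame, 2, 2))  # 2x2 grid of block averages
```

Each entry of the result plays the role of A′ for one divided area AR, feeding the per-area reference value and correction gain described next.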
  • the reference value determination unit 33, the correction gain calculation unit 34, and the correction processing unit 36 also perform the above-described processing for each divided area.
  • Specifically, the reference value determination unit 33 determines a reference value R′ for each divided area AR, the correction gain calculation unit 34 calculates a correction gain G′ for each divided area AR, and the correction processing unit 36 performs flicker correction for each divided area AR using the correction gain G′. Accordingly, appropriate flicker correction is performed for each divided area AR, and the above-described problem can be avoided.
  • For a divided area AR where no flicker occurs (for example, one dominated by sunlight), the correction gain G′ is substantially 1, and the correction process is equivalent to not performing flicker correction. This prevents the reverse situation in which applying flicker correction to a flicker-free divided area AR itself makes flicker visible.
  • For the divided areas AR affected by flicker, the correction gain G′ is calculated in the same manner as described above, and appropriate flicker correction is performed using the correction gain G′, so the influence of flicker can be reduced.
  • FIGS. 8 and 9 show the same space as the one shown in FIG. 6.
  • As shown in FIG. 8, a spherical object 45 is captured near the upper left of a certain frame (for example, frame 12).
  • the object 45 is a black object.
  • As shown in FIG. 9, assume that the object 45 has moved to the left in the next frame (frame 13).
  • the divided area AR1 is an area that changes from an area not included in the area of the object 45 (see FIG. 8) to an area included in the area of the object 45 (see FIG. 9) as the object 45 moves.
  • FIG. 10 is a graph showing a temporal change of the average luminance value A ′ and the reference value R ′ in the divided area of the divided area AR1.
  • the vertical axis indicates the average luminance value A ′ in the divided area or the reference value R ′
  • the horizontal axis indicates the number of frames (time axis).
  • Because the reference value R′ is calculated based on, for example, a correlation with past frames, as shown in FIG. 10 it does not follow the steep change in the in-divided-area average luminance value A′ and only decreases gradually. Accordingly, this reference value R′ is not appropriate for calculating the correction gain G′. The same applies to divided areas AR other than AR1 through which the object 45 moves.
  • Conversely, the in-divided-area average luminance value A′ may also increase sharply with the movement of a moving subject: for a divided area AR that initially lies within the area of the object 45 and leaves it as the object 45 moves, the in-divided-area average luminance value A′ may rise rapidly.
  • Therefore, the motion determination unit 32 determines, for each divided area AR, whether there is motion (a change due to a moving subject), and different flicker correction is performed depending on the presence or absence of motion. The motion determination unit 32 determines the presence or absence of motion by, for example, comparing the current frame with an earlier frame at the same phase of the flicker cycle. Other methods of determining the presence or absence of motion may also be applied.
  • the presence / absence of motion is determined using the average luminance value A ′ in the divided area of the target divided area AR and the average luminance value A ′ in the divided area of the neighboring divided area AR.
  • a method of judging the presence / absence of motion only by a change in the average luminance value A ′ in the divided area of one divided area AR may be additionally applied.
  • The motion determination unit 32 may determine that there is motion in the divided area AR of interest when both methods determine that there is motion.
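As an illustrative sketch of the per-area motion judgment (not the patent's implementation), comparing the current in-divided-area average A′ with that of an earlier frame sharing the same flicker phase cancels the flicker component, so only changes due to a moving subject remain. The relative `threshold` is a hypothetical tuning parameter.

```python
def has_motion(current_avg, same_phase_prev_avg, threshold=0.2):
    """Motion judgment for one divided area AR (illustrative sketch).

    `current_avg` and `same_phase_prev_avg` are in-divided-area average
    luminance values A' of the current frame and of the earlier frame at
    the same flicker phase.  `threshold` is a hypothetical parameter.
    """
    if same_phase_prev_avg == 0:
        return current_avg != 0
    change = abs(current_avg - same_phase_prev_avg) / same_phase_prev_avg
    return change > threshold

print(has_motion(100.0, 102.0))  # small change -> no motion (False)
print(has_motion(100.0, 180.0))  # dark object entered the area -> True
```

A real implementation would combine this with the neighboring-area comparison mentioned above, declaring motion only when both tests agree.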
  • The reference value determination unit 33 determines, according to the determination result of the motion determination unit 32, either a first reference value or a second reference value closer to the luminance of a predetermined frame than the first reference value, as the reference value. For example, the reference value determination unit 33 determines the first reference value as the reference value when the motion determination unit 32 determines that there is no motion, and determines the second reference value as the reference value when the motion determination unit 32 determines that there is motion.
  • More specifically, when the motion determination unit 32 determines that there is no motion, the reference value determination unit 33 determines the first reference value based on the luminance of a first frame and the luminance of a frame different from the first frame; when the motion determination unit 32 determines that there is motion, it determines a second reference value, different from the first reference value, based on the luminance of a second frame.
  • For a divided area AR determined by the motion determination unit 32 to have no motion, the reference value determination unit 33 determines the reference value R′ by multiplying the luminance of that divided area in a predetermined frame by a first coefficient; for a divided area AR determined to have motion, it determines the reference value R′ by multiplying the luminance of that divided area in the predetermined frame by a second coefficient different from the first coefficient.
  • Specifically, the second coefficient is set such that the weight of the in-divided-area average luminance value A′ of the divided area AR in frames temporally close to the current frame is increased.
  • The reference value determination unit 33 determines the reference value for a divided area AR determined to have no motion based on the following Equation 3:
    (Equation 3)  R1 = (A1 + A2 + A3 + A4) / 4
    where R1: reference value, A1: in-divided-area average luminance value of the predetermined divided area AR in the current frame, and A2 to A4: in-divided-area average luminance values of the divided areas at the same position as A1 in the preceding frames.
  • That is, when the motion determination unit 32 determines that there is no motion in the predetermined divided area AR, the coefficients applied to the in-divided-area average luminance values A′ of that divided area in each of the plurality of frames are set to equal values (an example of the first coefficient), so that the reference value R′ is the average of the in-divided-area average luminance values A′ over the plurality of frames.
  • For a divided area AR determined to have no motion, the reference value determination unit 33 thus determines the reference value R′ obtained by, for example, Equation 3 (an example of the first reference value) as the reference value for determining the correction gain.
  • The reference value determination unit 33 determines the reference value for a divided area AR determined to have motion based on the following Equation 4:
    (Equation 4)  R1 = 1 × A1 + 0 × A2 + 0 × A3 + 0 × A4 = A1
    In Equation 4, a coefficient of 1 (an example of the second coefficient) is set for the divided area AR in the current frame (the same current frame as used in Equation 3), and a coefficient of 0 is set for the divided areas AR in the other frames. That is, with Equation 4, the in-divided-area average luminance value A′ of the divided area AR in the current frame itself becomes the reference value R′. In this way, different coefficients are used depending on whether or not the predetermined divided area AR has motion.
  • For a divided area AR determined to have motion, the reference value determination unit 33 determines a reference value R′ different from the one determined by Equation 3, for example the reference value R′ determined by Equation 4 (an example of the second reference value), as the reference value for determining the correction gain.
  • With this, even when the in-divided-area average luminance value A′ of a divided area AR drops sharply between frames 12 and 13 as the moving subject moves, the in-divided-area average luminance value A′ of the current frame itself becomes the reference value R′, so an appropriate reference value R′ is obtained. The correction gain calculation unit 34 then calculates the correction gain G′ based on the obtained reference value R′ using, for example, Equation 2.
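The selection between the two reference values can be sketched as follows (an illustrative sketch of Equations 3 and 4, with four frames of history assumed):

```python
def reference_value_for_area(history, motion):
    """Reference value R' for one divided area AR (sketch of Eq. 3 / Eq. 4).

    `history` lists the in-divided-area averages A' of the same divided
    area, most recent first (history[0] is A1, the current frame).
    No motion: equal coefficients -> plain average (Equation 3).
    Motion:    coefficient 1 on the current frame, 0 elsewhere, so the
               current A' itself becomes the reference (Equation 4).
    """
    if motion:
        return history[0]                    # second reference value
    return sum(history) / len(history)       # first reference value

# current frame darkened sharply by a moving object entering the area
hist = [40.0, 100.0, 98.0, 102.0]
print(reference_value_for_area(hist, motion=False))  # (40+100+98+102)/4 = 85.0
print(reference_value_for_area(hist, motion=True))   # 40.0
```

With motion detected, the reference tracks the sharp drop immediately (R′ = 40), so Equation 2 yields a gain near 1 instead of an inflated gain computed against the stale average of 85.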
  • the process in which the reference value determination unit 33 determines the reference value R ′ of the divided area AR determined to have no motion is appropriately referred to as a first process.
  • the process (for example, the process based on the above-described Expression 4) in which the reference value determination unit 33 determines the reference value R ′ of the divided area AR determined to have a motion is appropriately referred to as a second process.
  • for example, the correction gain adjustment unit 35 adjusts the correction gain G′ of a given divided area AR21 by calculating a simple average of the correction gains G′ of the nine divided areas AR consisting of the divided area AR21 and the divided areas AR surrounding it. Alternatively, for example, as shown in FIG. 12B, the correction gain adjustment unit 35 may adjust the correction gain G′ of the divided area AR21 by calculating a weighted average of the correction gains G′ of the 25 divided areas AR consisting of the divided area AR21 and the divided areas AR surrounding it.
  • the number given to each divided area AR in FIG. 12B indicates a coefficient (weight).
  • the user may be allowed to set the range of peripheral divided areas AR over which the (weighted) average is calculated for a given divided area AR21.
  • the correction gain G ′ of the divided area AR21 adjusted by the correction gain adjustment unit 35 is supplied to the correction processing unit 36.
  • the correction processing unit 36 performs flicker correction on the corresponding divided area AR using the adjusted correction gain G′.
  • the correction gain adjustment processing by the correction gain adjustment unit 35 may be performed only on the divided areas AR determined to have motion, or on all divided areas AR, including those determined to have no motion.
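  • a minimal sketch of the simple-average adjustment, assuming a 3×3 neighborhood in which areas at the edge of the screen average only the neighbors that exist (the 5×5 weights of FIG. 12B are not reproduced in this excerpt):

```python
def adjust_gains(gains):
    """Smooth a 2-D grid of per-area correction gains G' with a simple
    3x3 neighborhood average; edge areas use only existing neighbors."""
    rows, cols = len(gains), len(gains[0])
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            total, count = 0.0, 0
            for dr in (-1, 0, 1):
                for dc in (-1, 0, 1):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols:
                        total += gains[rr][cc]
                        count += 1
            out[r][c] = total / count
    return out
```

  • a weighted average would replace the uniform accumulation with per-offset coefficients such as those shown in FIG. 12B, normalized by the sum of the weights used.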
  • in step ST11, the luminance calculation unit 31 performs a luminance calculation process that calculates the luminance of each divided area AR (for example, the average luminance value A′ in the divided area). Then, the process proceeds to step ST12.
  • in step ST12, the motion determination unit 32 performs a motion determination process that determines whether or not there is motion for each divided area AR.
  • the determination result of the motion determination unit 32 is supplied to the reference value determination unit 33.
  • in step ST13, when the motion determination unit 32 determines that there is no motion, the process proceeds to step ST14.
  • in step ST14, the reference value determination unit 33 performs a reference value determination process.
  • this process determines the reference value R′ of a divided area AR determined to have no motion; the reference value determination unit 33 determines the reference value R′ by the first process.
  • in step ST13, when the motion determination unit 32 determines that there is motion, the process proceeds to step ST15.
  • in step ST15, the reference value determination unit 33 performs a reference value determination process.
  • this process determines the reference value R′ of a divided area AR determined to have motion; the reference value determination unit 33 determines the reference value R′ by the second process.
  • in step ST16, the correction gain calculation unit 34 calculates the correction gain G′ for each divided area AR. That is, the correction gain calculation unit 34 calculates the correction gain G′ using the average luminance value A′ in the divided area and the reference value R′ calculated for each divided area AR. Then, the process proceeds to step ST17.
  • in step ST17, the correction gain adjustment unit 35 performs a correction gain adjustment process that adjusts the correction gain G′. The adjusted correction gain G′ is supplied to the correction processing unit 36. Then, the process proceeds to step ST18.
  • in step ST18, the correction processing unit 36 performs the correction process by applying the adjusted correction gain G′ to the corresponding divided area AR.
  • in this way, flicker correction is performed.
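  • the flow of steps ST11 to ST18 can be sketched for one divided area as follows. This is a sketch under two assumptions not confirmed by this excerpt: that the correction gain of Expression 2 is the ratio R′/A′, and that the correction process multiplies the area's pixel values by G′ (the ST17 neighborhood adjustment is omitted here):

```python
def flicker_correct_area(pixels, area_luminances, has_motion, current_index):
    """One divided area through steps ST11-ST18 (ST17 adjustment omitted).

    pixels:          luminance values of the area in the current frame.
    area_luminances: per-frame average luminance A' of this area, including
                     the current frame at current_index.
    """
    # ST11: luminance calculation (average luminance value A').
    a_prime = sum(pixels) / len(pixels)
    # ST12-ST15: the motion determination selects the reference-value process.
    if has_motion:
        r_prime = area_luminances[current_index]               # second process
    else:
        r_prime = sum(area_luminances) / len(area_luminances)  # first process
    # ST16: correction gain (assumed form of Expression 2: G' = R' / A').
    gain = r_prime / a_prime
    # ST18: correction process applies the gain to the area's pixels.
    return [p * gain for p in pixels]
```

  • note that for an area with motion the assumed gain is R′/A′ = 1 in the current frame, i.e. the area is left essentially uncorrected, which is consistent with avoiding over-correction where a moving subject changes the area's brightness.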
  • although the correction gain G′ is obtained for each divided area AR as described above, a correction gain may instead be obtained for each pixel.
  • for example, the correction gain for each pixel may be obtained by linearly interpolating the correction gains G′ of the plurality of divided areas AR.
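  • as one way to realize this (a sketch only; the text above states merely that linear interpolation may be used), a per-pixel gain can be obtained by bilinear interpolation between the gains treated as samples at the centers of the four surrounding divided areas:

```python
def pixel_gain(gains, x, y, area_w, area_h):
    """Bilinearly interpolate a 2-D grid of per-area gains G' to a
    per-pixel gain at pixel (x, y).

    Gains are treated as samples at area centers; pixels outside the
    outermost centers are clamped to the edge gains."""
    rows, cols = len(gains), len(gains[0])
    # Continuous grid coordinates of the pixel relative to area centers.
    gx = max(0.0, min(cols - 1.0, (x - area_w / 2) / area_w))
    gy = max(0.0, min(rows - 1.0, (y - area_h / 2) / area_h))
    c0, r0 = int(gx), int(gy)
    c1, r1 = min(c0 + 1, cols - 1), min(r0 + 1, rows - 1)
    fx, fy = gx - c0, gy - r0
    top = gains[r0][c0] * (1 - fx) + gains[r0][c1] * fx
    bot = gains[r1][c0] * (1 - fx) + gains[r1][c1] * fx
    return top * (1 - fy) + bot * fy
```

  • interpolating the gain per pixel avoids visible block boundaries between divided areas that a constant per-area gain could otherwise produce.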
  • in the embodiment described above, the second coefficient is set such that the reference value is equal to the average luminance value in the divided area of the current frame, but the present invention is not limited to this.
  • for example, the second coefficient may be set so as to include the average luminance value in the divided area of a past or future frame.
  • the arithmetic expressions for obtaining the luminance and the reference value may differ from those described in the embodiment.
  • for example, the second coefficient may be set such that the proportion of the luminance of the divided area in frames temporally close to the current frame is increased.
  • the frames temporally close to the current frame include at least one of the current frame, a past frame within a predetermined number of frames of the current frame (a frame temporally earlier than the current frame), and a future frame within a predetermined number of frames of the current frame (a frame temporally later than the current frame).
  • when a future frame is used, flicker correction of the current frame is performed after the future frame has been acquired (the flicker correction is delayed).
  • a frame that is in the future relative to the current frame may be stored in the frame memory, and the motion determination unit may determine whether or not there is motion for each divided area using the current frame and the future frame. When it is determined that there is no motion, the reference value determination unit may determine the reference value based on the luminance of the divided area in the future frame. With such processing, the reference value is not determined based on past correlation, so an appropriate reference value can be determined.
  • in this case, the current frame means the frame that has been stored and delayed in the frame memory, and the future frame means a frame that has not been delayed.
  • the average value of the luminance of each pixel in the predetermined divided area (the average luminance value in the divided area) is described as an example of the luminance, but the present invention is not limited to this. Instead of the average value of the luminance of each pixel in the divided area, an integrated pixel level obtained by integrating the luminance of each pixel may be used as the luminance.
  • the imaging device has been described as an example, but the present invention is not limited to this.
  • the present disclosure can also be configured as, for example, a signal processing device that has no imaging unit but has a flicker correction unit.
  • a signal processing device may be referred to as a camera control unit, a baseband processor unit, or the like.
  • the present disclosure can be realized in any form, such as a method, a program, and a system, in addition to the device.
  • for example, a program implementing the functions described in the above embodiment can be made available for download, and a device that does not have those functions (for example, a smartphone having an imaging function) can download and install the program so that the control described in the embodiment can be performed in that device.
  • the present disclosure can also be realized by a server that distributes such a program.
  • the present disclosure may have the following configurations.
  • (1) A signal processing device comprising: a motion determination unit that determines whether or not there is motion; and a reference value determination unit that determines a reference value for determining a correction gain for correcting flicker, wherein the reference value determination unit determines, as the reference value, one of a first reference value and a second reference value closer to the luminance of a predetermined frame than the first reference value, according to the determination result of the motion determination unit.
  • (2) The signal processing device according to (1), wherein the reference value determination unit determines the first reference value as the reference value when the motion determination unit determines that there is no motion, and determines the second reference value as the reference value when the motion determination unit determines that there is motion.
  • (3) The signal processing device according to (1) or (2), wherein, when the motion determination unit determines that there is no motion, the reference value determination unit determines the first reference value based on the luminance of a first frame and the luminance of a frame different from the first frame, and, when the motion determination unit determines that there is motion, determines the second reference value, different from the first reference value, based on the luminance of a second frame.
  • (4) The signal processing device according to (3), wherein the first frame and the second frame are at the same timing in terms of time.
  • (5) The signal processing device according to (4), wherein the first frame is temporally closer to the current frame than the frame different from the first frame.
  • (6) The signal processing device according to (5), wherein the reference value determination unit sets the second reference value so as to be equal to the luminance in the current frame.
  • (7) The signal processing device according to any one of (3) to (6), wherein the reference value determination unit determines the first reference value using a first coefficient applied to the luminance of the first frame when the motion determination unit determines that there is no motion, and determines the second reference value using a second coefficient, different from the first coefficient, applied to the luminance of the second frame when the motion determination unit determines that there is motion.
  • the second coefficient is a coefficient set such that the proportion of the luminance in a frame temporally close to the current frame is increased.
  • the frames temporally close to the current frame include the current frame, a past frame within a predetermined number of frames of the current frame, and a future frame within a predetermined number of frames of the current frame.
  • (12) The signal processing device according to any one of (1) to (11), wherein the motion determination unit determines whether or not there is motion for each divided area formed by dividing one screen, and the reference value determination unit determines the reference value for each of the divided areas.
  • (13) The signal processing device according to (11), further comprising a correction gain adjustment unit that adjusts the correction gain calculated by the correction gain calculation unit using correction gains in peripheral divided areas.
  • (14) The signal processing device according to (13), wherein the correction gain adjustment unit adjusts at least the correction gain of a divided area determined to have motion using correction gains in peripheral divided areas.
  • (15) The signal processing device according to (13) or (14), wherein the correction gain adjustment unit adjusts the correction gain of a divided area determined to have no motion using correction gains in peripheral divided areas.
  • (16) The signal processing device according to any one of (1) to (15), further comprising a luminance calculation unit that calculates the luminance for each predetermined divided area.
  • A signal processing method in which a motion determination unit determines whether or not there is motion, and a reference value determination unit determines a reference value for determining a correction gain for correcting flicker, wherein the reference value determination unit determines, as the reference value, one of a first reference value and a second reference value closer to the luminance of a predetermined frame than the first reference value, according to the determination result of the motion determination unit.
  • A program for causing a computer to execute a signal processing method in which a motion determination unit determines whether or not there is motion, and a reference value determination unit determines a reference value for determining a correction gain for correcting flicker, the reference value determination unit determining, as the reference value, one of a first reference value and a second reference value closer to the luminance of a predetermined frame than the first reference value, according to the determination result of the motion determination unit.
  • 34: correction gain calculation unit, 35: correction gain adjustment unit, 36: correction processing unit

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

This signal processing device comprises a motion determination unit that determines whether there is motion, and a reference value determination unit that determines a reference value for determining a correction gain for correcting flicker. According to the result of the determination by the motion determination unit, the reference value determination unit determines, as the reference value, one of a first reference value and a second reference value that is closer to the luminance of a predetermined frame than the first reference value.
PCT/JP2019/023003 2018-08-02 2019-06-11 Dispositif de traitement de signaux, dispositif d'imagerie, et procédé de traitement de signaux WO2020026595A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018146157 2018-08-02
JP2018-146157 2018-08-02

Publications (1)

Publication Number Publication Date
WO2020026595A1 true WO2020026595A1 (fr) 2020-02-06

Family

ID=69232399

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/023003 WO2020026595A1 (fr) 2018-08-02 2019-06-11 Dispositif de traitement de signaux, dispositif d'imagerie, et procédé de traitement de signaux

Country Status (1)

Country Link
WO (1) WO2020026595A1 (fr)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1175109A (ja) * 1997-06-27 1999-03-16 Matsushita Electric Ind Co Ltd 固体撮像装置
JP2009081684A (ja) * 2007-09-26 2009-04-16 Panasonic Corp フリッカ低減装置

Similar Documents

Publication Publication Date Title
US10397486B2 (en) Image capture apparatus and method executed by image capture apparatus
US10009551B1 (en) Image processing for merging images of a scene captured with differing camera parameters
WO2017217137A1 (fr) Dispositif de commande d'imagerie, procédé de commande d'imagerie, et programme
JP2009212627A (ja) 画像処理装置、フリッカ低減方法、撮像装置及びフリッカ低減プログラム
JP2017022610A (ja) 画像処理装置、画像処理方法
JP2011109373A (ja) 撮像装置、撮像処理方法及びプログラム
JP6047686B2 (ja) 撮影装置
JP6911850B2 (ja) 画像処理装置、画像処理方法およびプログラム
JP2020102666A (ja) 画像処理装置、撮像装置、画像処理方法、及びプログラム
US20180278905A1 (en) Projection apparatus that reduces misalignment between printed image and projected image projected on the printed image, control method therefor, and storage medium
JP2013225724A (ja) 撮像装置及びその制御方法、プログラム、並びに記憶媒体
US11310440B2 (en) Image processing apparatus, surveillance camera system, and image processing method
KR20120122574A (ko) 디지털 카메라 장치에서 영상 처리 장치 및 방법
JP2010183460A (ja) 撮像装置およびその制御方法
JP5585117B2 (ja) マルチディスプレイシステム、マルチディスプレイ調整方法およびプログラム
WO2020026595A1 (fr) Dispositif de traitement de signaux, dispositif d'imagerie, et procédé de traitement de signaux
JPWO2018105097A1 (ja) 画像合成装置、画像合成方法、及び画像合成プログラム
JP6157274B2 (ja) 撮像装置、情報処理方法及びプログラム
JP2010183461A (ja) 撮像装置およびその制御方法
JP6725105B2 (ja) 撮像装置及び画像処理方法
JP2007267170A (ja) 彩度調整機能を有する電子カメラ、および画像処理プログラム
JP2020036162A (ja) 画像処理装置、撮像装置、画像処理方法、及びプログラム
JP2007282161A (ja) 輝度調整装置および輝度調整制御方法
WO2021038692A1 (fr) Dispositif d'imagerie, procédé d'imagerie, et programme de traitement vidéo
JP5789330B2 (ja) 撮像装置およびその制御方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19844895

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19844895

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP