WO2013009099A2 - Blur processing apparatus and method - Google Patents
Blur processing apparatus and method
- Publication number
- WO2013009099A2 (PCT/KR2012/005512)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- blur
- image
- light signal
- reflected light
- occurs
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
- G01S17/894—3D imaging with simultaneous measurement of time-of-flight at a 2D array of receiver pixels, e.g. time-of-flight cameras or flash lidar
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/02—Systems using the reflection of electromagnetic waves other than radio waves
- G01S17/06—Systems determining position data of a target
- G01S17/08—Systems determining position data of a target for measuring distance only
- G01S17/10—Systems determining position data of a target for measuring distance only using transmission of interrupted, pulse-modulated waves
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/483—Details of pulse systems
- G01S7/486—Receivers
- G01S7/4865—Time delay measurement, e.g. time-of-flight measurement, time of arrival measurement or determining the exact position of a peak
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S7/00—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
- G01S7/48—Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S17/00
- G01S7/497—Means for monitoring or calibrating
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/73—Deblurring; Sharpening
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Definitions
- the present invention relates to a technique for removing blur generated in an image.
- An image generated by a time-of-flight (ToF) camera is obtained by computing, for each frame, the phase difference between an irradiated light signal (for example, an infrared signal) and the light signal reflected back from a subject during the integration time. When the camera or the subject moves within a time shorter than the exposure time, the phase of the reflected light signal may change.
- in that case, the depth information obtained from the reflected light signal takes an incorrect value, which appears as blur in the image.
- This phenomenon may occur for a similar reason to blur occurring in color cameras.
- however, a ToF camera and a color camera acquire images in different ways, so the blur that occurs in a ToF camera differs from the blur of a color image.
- the blur processing apparatus may include a control unit that generates control signals; a sensor unit that, according to the control signals, integrates electrons generated by the reflected light signal produced when an irradiated light signal is reflected from an object; and a blur determining unit that determines whether blur occurs using the relationship between the amounts of electrons integrated for each control signal.
- the blur determining unit of the blur processing apparatus may determine whether blur occurs by comparing the amounts of the accumulated electrons against reference charge amount relationship information indicating the case where there is no blur.
- the blur determination unit of the blur processing apparatus may determine, using the phase difference of the reflected light signal, whether blur occurs in an image acquired by the ToF camera.
- the blur determination unit may calculate depth information by computing the phase difference between a plurality of control signals having different phases and the reflected light signal.
- the sensor unit may acquire the amount of charge received for each of a plurality of control signals having different phases with respect to the reflected light signal, and the blur determiner may determine whether blur occurs in the image depending on whether the relationship between the respective charge amounts, defined by the different phase differences between the control signals, is out of the normal range.
- the sensor unit may obtain n (n is a natural number) reflected light signals, and the blur determiner may calculate n depth values by computing the phase differences between a plurality of control signals having different phases and the n reflected light signals, and may calculate the average of the n depth values.
- the blur determining unit may determine that blur has occurred in the image when the calculated n depth values are not flat.
- the blur processing apparatus can remove the generated blur by replacing, in an image obtained through a ToF camera, the pixel value in which the blur occurs with a surrounding normal pixel value where no blur occurs.
- the image processor may replace the depth information of the blur area in which the blur occurs with a normal pixel value of a peripheral area in which no blur occurs, based on a neighboring frame at a time different from the frame in which the blur occurs in the image.
- the blur processing apparatus may include a model identification unit that identifies a blur model corresponding to the structure of a ToF camera, a search unit that searches an image acquired through the ToF camera for a pattern associated with the blur model, and an image processor configured to remove blur from the image by filtering a blur region within the searched pattern.
- the image processor of the blur processing apparatus may filter the blur area on an r-theta space.
- the blur processing method includes generating a plurality of control signals having different phases, integrating, according to the control signals, electrons generated by the reflected light signal produced when an irradiated light signal is reflected from an object, and determining whether blur occurs using the relationship between the amounts of electrons integrated for each control signal.
- the determining of whether blur occurs in the blur processing method may include comparing the relationship between the amounts of integrated electrons against reference charge amount relationship information indicating the absence of blur.
- the determining of whether blur occurs in the blur processing method may include determining, using the phase difference of the reflected light signal, whether blur has occurred in an image obtained through the ToF camera.
- the determining of whether blur occurs in the blur processing method may include calculating depth information by computing the phase difference between a plurality of control signals having different phases and the reflected light signal.
- the determining of whether blur occurs in the blur processing method may include calculating the phase differences between the acquired n (n is a natural number) reflected light signals and a plurality of control signals having different phases, calculating n depth values using the results, and calculating the average of the n depth values.
- when blur is determined to have occurred, the blur processing method may further include removing the blur by replacing, in the image acquired through the ToF camera, the pixel value in which the blur occurs with a surrounding normal pixel value where no blur occurs.
- the removing of the generated blur may include replacing the depth information of the blur area in which the blur occurs with a surrounding normal pixel value where no blur occurs, based on a neighboring frame at a time different from the frame in which the blur occurs.
- the blur processing method may further include identifying a blur model corresponding to the structure of a ToF camera, searching an image obtained through the ToF camera for a pattern associated with the blur model, and removing the blur from the image by filtering a blur area generated within the searched pattern.
- the filtering of the blur region generated in the searched pattern may include filtering the blur region in an r-theta space.
- the relationship between a plurality of reflected light signals reflected from the object can be used to efficiently determine whether blur occurs in an image acquired by a ToF camera.
- the generated blur can be easily removed by replacing the blur generated in the image with the normal pixel value of the surroundings where no blur has occurred.
- FIG. 1 is a block diagram showing an embodiment of a blur processing apparatus.
- FIG. 2 is a diagram illustrating an embodiment of a pixel constituting a sensor unit.
- FIG. 3 is an example of a timing diagram between a reflected light signal and a control signal.
- FIGS. 4 to 6 are diagrams illustrating examples of obtaining a reflected light signal for determining whether blur occurs in an image.
- FIGS. 7 and 8 are diagrams illustrating the relationship between a reflected light signal and a control signal when there is no movement of the subject and when there is movement of the subject, respectively, according to an exemplary embodiment.
- FIG. 9 is a graph illustrating depth information of a blur area in which blur occurs in an image, according to an exemplary embodiment.
- FIG. 10 is a diagram illustrating an association between a blur image and depth information, according to an exemplary embodiment.
- FIG. 11 illustrates an association between depth information and a blur model, according to an exemplary embodiment.
- FIG. 12 illustrates an example of removing blur in an image, according to an exemplary embodiment.
- FIG. 13 is a flowchart illustrating a procedure of a blur processing method according to an exemplary embodiment.
- FIG. 14 is a block diagram showing a configuration of a blur processing apparatus according to another embodiment.
- FIG. 15 is a diagram illustrating an example of filtering a blur area using a blur model.
- FIG. 16 is a flowchart illustrating a procedure of a blur processing method according to another embodiment.
- FIG. 1 is a block diagram showing an embodiment of a blur processing apparatus.
- the blur processing apparatus 100 may include a controller 110, a sensor 120, a blur determiner 130, and an image processor 140.
- the blur processing apparatus 100 illustrated in FIG. 1 may be implemented as a ToF camera.
- the light irradiator 102 may be included in the ToF camera.
- the ToF camera can generate a depth image representing the distance from the camera to the object by using the phase difference between the irradiated light signal (for example, an infrared signal) directed at the object 104 and the reflected light signal returned from the object 104.
- the light irradiator 102 may irradiate the irradiated light signal to the object 104.
- the sensor unit 120 may sense the reflected light signal from which the irradiation light signal irradiated from the light irradiator 102 is reflected by the object 104.
- the sensor unit 120 may include an optical sensing device such as a pinned photo diode (PPD), a photogate, or a charge coupled device (CCD).
- the object 104 is the subject to be photographed.
- the controller 110 generates a plurality of control signals having different phases.
- the control signal is a signal capable of controlling the timing at which the sensor unit 120 accumulates the electrons generated by sensing the reflected light signal.
- FIG. 2 is a diagram illustrating an embodiment of a pixel constituting a sensor unit.
- the pixel 200 may include a detector 210 (such as a photogate), gates (gate-A 221 and gate-B 222), and integration units 231 and 232.
- the detector 210 generates the electrons by receiving the reflected light signal.
- the gate-A 221 and the gate-B 222 may transfer electrons generated by the detector 210 to the integrated units 231 and 232, respectively.
- a plurality of gates, for example the gate-A 221 and the gate-B 222, are provided to selectively transfer electrons to the plurality of different integration units 231 and 232 according to the control signals.
- the accumulators 231 and 232 may accumulate the transferred electrons.
- the integration time or period of the electrons can be predefined.
- for example, the accumulators 231 and 232 may be defined to accumulate electrons for a certain time, to release the electrons after the amount of accumulated electrons is counted, and then to accumulate electrons again at the next integration timing.
- the on / off of the gate may be controlled by the control signal described above.
- FIG. 3 is an embodiment of a timing diagram between a reflected light signal and a control signal.
- the first control signal and the second control signal may control the gate-A 221 and the gate-B 222 of FIG. 2, respectively, and have a phase difference of 180 degrees from each other.
- the half period of the first and second control signals may be, for example, 25 ns.
- electrons may be generated in the detector 210 during t_ON, the interval in which the reflected light signal is high.
- some of the generated electrons are integrated in the integration unit 231 via the gate-A 221 during t_ON − t_ΔOF, while the first control signal associated with the gate-A 221 is high.
- the remaining electrons are integrated in the integration unit 232 via the gate-B 222 during t_ΔOF, while the first control signal is low and the second control signal associated with the gate-B 222 is high.
- in other words, of the electrons generated during t_ON in which the reflected light signal is high, those generated while the gate-A 221 is on during t_ON − t_ΔOF are transferred to the integration unit 231 associated with the gate-A 221, and those generated while the gate-B 222 is on during t_ΔOF are transferred to the integration unit 232 associated with the gate-B 222.
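As a hypothetical illustration of this gate timing (not code from the patent; the electron rate and the durations are invented values), the split of charge between the two integration units can be sketched as:

```python
# Illustrative sketch: electrons generated while the reflected pulse is
# high (t_ON) are split between gate-A and gate-B by the complementary
# control signals of FIG. 3. All numeric values are assumptions.

def split_charge(t_on, t_dof, electrons_per_ns=10.0):
    """Return (q_a, q_b): charge integrated behind gate-A and gate-B.

    gate-A is on for the first (t_on - t_dof) ns of the reflected
    pulse; gate-B is on for the remaining t_dof ns.
    """
    q_a = electrons_per_ns * (t_on - t_dof)  # via gate-A / integrator 231
    q_b = electrons_per_ns * t_dof           # via gate-B / integrator 232
    return q_a, q_b

q_a, q_b = split_charge(t_on=25.0, t_dof=5.0)
print(q_a, q_b)  # the split between the two integrators encodes t_dof
```

The ratio of the two accumulated charges is what encodes the signal delay, which is why the depth computation below works purely on charge amounts.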
- the blur determination unit 130 determines whether blur has occurred using the relationship between the amounts of electrons (hereinafter referred to as charge amounts) accumulated for each control signal.
- the blur determination unit 130 may obtain a phase difference between the reflected light signal and the irradiated light signal using control signals having different phases. For example, since the sensor unit 120 repeatedly acquires the reflected light signal reflected and returned during the integration time of the ToF type camera, the blur determiner 130 may obtain depth information based on the reflected light signal.
- in the following, C1 to C4 denote the control signals, Q1 to Q4 denote the corresponding charge amounts, and t_d denotes the depth information.
- the ToF type camera may generate control signals having L (L is a natural number) phases different from each other.
- the ToF camera may be configured in an L-Phase / M-tap method having M (M is a natural number) charge storage spaces.
- for example, the ToF camera may generate four control signals C1, C2, C3, and C4 having a phase difference of 90 degrees from one another.
- the sensor unit 120 may sequentially acquire the charge amount Q1 from the reflected light signal and the control signal C1, the charge amount Q2 from the reflected light signal and the control signal C2, the charge amount Q3 from the reflected light signal and the control signal C3, and the charge amount Q4 from the reflected light signal and the control signal C4.
- the 4-phase / 1-tap method of FIG. 4, the 4-phase / 2-tap method of FIG. 5, and the 4-phase / 4-tap method of FIG. 6 may indicate that the structure of the ToF camera is different.
- the sensor unit 120 may have a different method of acquiring a phase difference between the reflected light signal and the control signals during the exposure time according to the structure of the ToF camera.
- FIGS. 4 to 6 illustrate examples of generating four control signals and acquiring four charge amounts.
- the number of control signals may be smaller or larger than four according to an embodiment.
- the blur determination unit 130 determines whether blur occurs in an image acquired by the ToF camera using the relationship between the obtained charge amounts Q1, Q2, Q3, and Q4. For example, the blur determination unit 130 may calculate a first difference value Q1 − Q2, the difference between the charge amounts obtained while the control signals C1 and C2 are high. In addition, the blur determination unit 130 may calculate a second difference value Q3 − Q4, the difference between the charge amounts obtained while the control signals C3 and C4 are high.
- the blur determination unit 130 may calculate the depth information t_d by dividing the second difference value (Q3 − Q4) by the first difference value (Q1 − Q2) and applying an arctangent function to the quotient (Q3 − Q4) / (Q1 − Q2).
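The arctangent step can be sketched as follows. This is an illustrative reconstruction, not the patent's implementation; the function name, the modulation period `period_ns`, and the wrap-around handling are assumptions:

```python
import math

# Sketch of the 4-phase depth computation described above: the phase
# delay is the arctangent of (Q3 - Q4) / (Q1 - Q2), scaled by an
# assumed modulation period.

def depth_time(q1, q2, q3, q4, period_ns=50.0):
    """Phase delay t_d (ns) computed from the four charge amounts."""
    phase = math.atan2(q3 - q4, q1 - q2)   # quadrant-aware arctangent
    if phase < 0:
        phase += 2 * math.pi               # keep the delay non-negative
    return period_ns * phase / (2 * math.pi)

# A signal delayed by a quarter period: Q1 == Q2 and Q3 > Q4.
print(depth_time(100, 100, 150, 50))  # → 12.5
```

Using `atan2` rather than a bare `atan` avoids division by zero when Q1 equals Q2 and resolves the quadrant of the phase.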
- the timing diagram of the control signals C1 to C4 in the 4-phase / 2-tap embodiment shown in FIG. 5 represents one cycle of acquiring the charges Q1 to Q4.
- Q1 to Q4 can each be obtained n times.
- in that case, the first difference value described above may be nQ1 − nQ2
- and the second difference value may be nQ3 − nQ4.
- the depth information t_d in one pixel may then be represented by Equation 1 below:

  [Equation 1] t_d = arctan( n(Q3 − Q4) / n(Q1 − Q2) )
- a change may occur in a phase of a reflected light signal sensed by at least one of the pixels included in the sensor unit 120.
- FIGS. 7 and 8 are diagrams illustrating the relationship between a reflected light signal and a control signal when there is no movement of the subject and when there is movement of the subject, respectively, according to an exemplary embodiment. FIGS. 7 and 8 are conceptual views for better understanding, and other embodiments are not to be interpreted as limited by them.
- the reflected light signal is reflected at the circled point R, and the control signal controls the pixel of the sensor unit 120 that senses the reflected light signal at that point.
- in FIG. 7, the amounts of charge generated during the first high interval and the second high interval of the control signal are each equal to Qa.
- as shown in FIG. 8, when the object from which the irradiated light signal is reflected changes from the chair in the image to the background (compare the circled point R at times t0 and t1), the time at which the reflected light signal reaches the pixel may be delayed from t1 to t1'.
- a phase change may occur in the reflected light signal sensed by the pixel.
- as a result, the amount of charge generated during the first high interval and the amount generated during the second high interval differ from each other, as Qa and Qa'.
- since the depth value is determined using the charges generated during the two high intervals of the control signal, in the embodiment of FIG. 8 the pixel value corresponding to the circled point R in the depth image is computed from Qa and Qa', causing blur.
- over the n periods of the control signals C1 and C2, the blur determination unit 130 may calculate the first difference value (Q1 − Q2) from the m periods before the phase change of the reflected light signal occurs, and a new first difference value (Q1' − Q2') from the remaining n − m periods that carry the new phase.
- likewise, over the n periods of the control signals C3 and C4, the blur determination unit 130 may calculate the second difference value (Q3 − Q4) from the m periods before the phase change, and a new second difference value (Q3' − Q4') from the remaining n − m periods having the new phase.
- the depth information t_d may then be represented by Equation 2 below, where Qi denotes a charge amount acquired in the m periods before the phase change and Qi' a charge amount acquired in the n − m periods after it:

  [Equation 2] t_d = arctan( ( m(Q3 − Q4) + (n − m)(Q3' − Q4') ) / ( m(Q1 − Q2) + (n − m)(Q1' − Q2') ) )
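To see how charge mixed across a motion boundary distorts the computed depth, one can blend hypothetical pre-motion and post-motion charge tuples. This is a numeric sketch only; all values and names are invented for illustration:

```python
import math

# Blend m pre-motion periods with (n - m) post-motion periods and
# compute the depth from the mixed charge, as in the description above.

def blended_depth(m, n, pre, post, period_ns=50.0):
    """Depth computed from charges mixed across a motion boundary.

    pre/post are (q1, q2, q3, q4) tuples for the old and new phase;
    m periods contribute `pre`, the remaining n - m contribute `post`.
    """
    num = m * (pre[2] - pre[3]) + (n - m) * (post[2] - post[3])
    den = m * (pre[0] - pre[1]) + (n - m) * (post[0] - post[1])
    phase = math.atan2(num, den) % (2 * math.pi)
    return period_ns * phase / (2 * math.pi)

pre = (200, 0, 100, 100)    # hypothetical near surface (depth 0.0 ns)
post = (100, 100, 150, 50)  # hypothetical far background (depth 12.5 ns)
depths = [blended_depth(m, 10, pre, post) for m in range(11)]
# The blend is not a simple linear interpolation between the two depths.
```

Plotting `depths` against `m` reproduces the characteristic non-linear profile discussed with Equation 2 and FIG. 9: the mixed pixel does not land halfway between the two true depths.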
- regarding t_d in Equation 2 as a function of the variable m, its first derivative can be denoted t_d' and represented by Equation 3 below.
- from Equation 3, the same value as Equation 4 can be obtained.
- the change in the depth information t_d caused by the phase difference change can have one local maximum or local minimum over the obtained m.
- the position of that extremum within the m periods changes depending on the depth information before and after the movement.
- therefore, blur in an image generated by a ToF camera does not take the form of the median of the two end values or of a monotone increase or decrease. Because a ToF camera acquires images in a manner different from a color camera, the method of removing blur from an image generated by a ToF camera may be entirely different from the method of removing blur from a color image.
- the blur determiner 130 may calculate the first difference between the charge amounts Q1 and Q2 obtained through the control signals C1 and C2, and the second difference between the charge amounts Q3 and Q4 obtained through the control signals C3 and C4, n times each, and may calculate depth information n times using the results.
- that is, the blur determination unit 130 calculates depth information using the charge amounts obtained in each cycle of the control signals, and a frame of the depth image may be generated using the average of the n depth values calculated in this way.
- the blur determining unit 130 may determine that blur has occurred in the depth image when the calculated n depth values are not flat. For example, when the phase of the reflected light signal is constant, the n calculated depth values are constant; when the phase of the reflected light signal is not constant, the n calculated depth values vary, so the blur determination unit 130 can determine that blur has occurred in the corresponding area of the image.
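The per-pixel flatness test described above might be sketched as follows; the tolerance is an assumed parameter, not taken from the patent:

```python
# Sketch of the flatness check: if the n per-cycle depth values
# computed during one integration time are not constant, flag blur.
# The tolerance (in the same units as the depth values) is assumed.

def is_blurred(depths, tol=0.5):
    """True if the n per-cycle depth values are not flat."""
    return max(depths) - min(depths) > tol

print(is_blurred([10.0, 10.1, 9.9, 10.0]))        # static scene → False
print(is_blurred([10.0, 10.0, 3.7, 12.5, 12.5]))  # motion mid-frame → True
```

A spread threshold is only one possible flatness criterion; a deviation-from-median or slope test would fit the same description.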
- FIG. 9 is a graph illustrating depth information of a blur area in which blur occurs in an image, according to an exemplary embodiment.
- the blur determiner 130 may determine whether blur occurs by referring to depth information of the moving object. According to an example, the blur determiner 130 may determine that blur occurs in an area where the depth information associated with each pixel coordinate (pixel value) does not have a uniform value.
- for example, the blur determination unit 130 may determine a region whose depth information differs from that of the other regions to be the blur region 910 in which blur has occurred. When the depth information plotted against pixel coordinates shows a peak, as in FIG. 9, the blur determination unit 130 may determine that blur has occurred in the pixel region 910 forming the peak.
- FIG. 10 is a diagram illustrating an association between a blur image and depth information, according to an exemplary embodiment.
- the blur determining unit 130 calculates depth information of predetermined regions in the images of FIG. 10 (a), (d), and (g), and can use the calculated depth information to determine whether blur occurs in those images.
- FIGS. 10 (b) and (c) are graphs showing the depth information calculated for region i and region ii of the first image (FIG. 10 (a)).
- the depth information calculated in both region i and region ii of the first image (FIG. 10 (a)) is not perfectly constant, but is even at all pixel coordinates.
- the blur determination unit 130 may determine that no blur occurs in the first image (FIG. 10A).
- FIGS. 10 (e) and (f) are graphs showing the depth information calculated for region i and region ii of the second image with motion (FIG. 10 (d)).
- the depth information calculated in both region i and region ii of the second image with motion (FIG. 10 (d)) shows distinctive features not seen in FIGS. 10 (b) and (c) of the motionless first image (FIG. 10 (a)), such as peaks or sudden changes in slope.
- the blur determination unit 130 may determine that blur occurs in the second image (FIG. 10 (d)).
- FIGS. 10 (h) and (i) are graphs showing the depth information calculated for region i and region ii of the third image (FIG. 10 (g)).
- the depth information calculated in both region i and region ii of the third image (FIG. 10 (g)), like that of the first image (FIG. 10 (a)), is not perfectly constant but is even at all pixel coordinates.
- the blur determination unit 130 may determine that no blur occurs in the third image (FIG. 10G).
- FIG. 11 illustrates an association between depth information and a blur model, according to an exemplary embodiment.
- the relationship between Q1 − Q2 and Q3 − Q4 may have a rhombus shape, as shown in FIG. 11 (b).
- that is, for each pixel the point (Q1 − Q2, Q3 − Q4) may lie on the rhombus 710 of FIG. 11 (b).
- n(Q1 − Q2) and n(Q3 − Q4), the differences of the charge amounts obtained during the n periods of each control signal, may have a similar form.
- the size or shape of the rhombus may vary depending on the embodiment. In FIG. 11 (a) there is no blur due to movement, and the charge-amount relationships used to calculate the depth information of each pixel may correspond to the two coordinates 720 and 730 in FIG. 11 (b).
- FIG. 11 (c) includes non-uniform values 740 caused by blur; in this case the relationship between Q1, Q2, Q3, and Q4 may be plotted in the region 750 outside the rhombus, as shown in FIG. 11 (d).
- comparing FIG. 11 (d), obtained from the blurred image, with FIG. 11 (b), obtained from the image without blur, it can be seen that values are plotted in areas other than on the rhombus.
- the blur processing apparatus 100 may store in advance reference charge amount relationship information, which indicates the relationship between the charge amounts integrated by the reflected light signal and the control signals in a situation where neither the subject, the camera, nor the background moves (that is, where no motion blur occurs).
- the blur determination unit 130 may compare the relationship between the charge amounts integrated by the reflected light signal and the control signals at the time of shooting against the pre-stored reference charge amount relationship information, and thereby determine whether blur occurs. For example, if the relationship between the charge amounts obtained from the control signals in a specific pixel falls outside the reference charge amount relationship information, the blur determination unit 130 may determine that blur has occurred in that pixel.
- that is, the blur determining unit 130 can determine whether blur has occurred by checking, for each pixel, whether the relationship between the control-signal charge amounts (Q1 − Q2, Q3 − Q4) described above leaves the normal range (a point on the rhombus).
- in other words, by determining whether the relationship among the charge amounts Q1 to QN, defined by the predetermined phase differences between the control signals, is out of the normal range, blur can be detected immediately while the depth information of each pixel of the ToF camera is being calculated.
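Assuming the blur-free relationship forms the rhombus |Q1 − Q2| + |Q3 − Q4| = A suggested by FIG. 11 (the amplitude A and the tolerance below are hypothetical calibration values, not given in the text), the out-of-range test could be sketched as:

```python
# Sketch of the normal-range test of FIG. 11: for a blur-free pixel the
# point (Q1 - Q2, Q3 - Q4) lies on a rhombus |x| + |y| = A. The
# amplitude and tolerance are assumed calibration values.

def off_rhombus(q1, q2, q3, q4, amplitude=200.0, tol=10.0):
    """True if the charge relationship leaves the reference rhombus."""
    deviation = abs(abs(q1 - q2) + abs(q3 - q4) - amplitude)
    return deviation > tol

print(off_rhombus(200, 0, 100, 100))   # on the rhombus → False (normal)
print(off_rhombus(150, 50, 120, 100))  # inside the rhombus → True (blur)
```

In practice the amplitude would come from the pre-stored reference charge amount relationship information mentioned above, e.g. measured per pixel during calibration.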
- the image processor 140 may remove the generated blur by referring to a normal pixel value around the blur area where the blur has occurred in the image.
- the image processor 140 may remove the generated blur by replacing a pixel value in which the blur occurs in the image with a normal pixel value in the vicinity where the blur does not occur.
- the image processor 140 may replace the depth information of the blur area in which the blur occurs with the depth value of the pixel located at the shortest distance among the normal pixels in the vicinity where the blur does not occur.
- based on a neighboring frame captured at a time different from the frame in which the blur occurs, the image processor 140 may replace the depth information of the blur area in which the blur occurs with a normal pixel value of the peripheral area in which the blur does not occur.
- the image processor 140 may replace a pixel value in which a blur occurs in a specific frame with a pixel value having the same coordinate in a previous frame or a subsequent frame.
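The frame-based replacement described above can be sketched as follows; the list-of-lists frame representation and the prefer-previous-frame policy are illustrative assumptions, not details given in the source:

```python
def deblur_temporal(frames, blur_masks):
    """Replace blurred pixels with the same-coordinate value from a
    neighboring (previous, else subsequent) frame where that pixel is normal.

    frames:     list of 2-D lists of depth values, one per frame
    blur_masks: parallel 2-D lists, True where blur was detected
    """
    result = [[row[:] for row in f] for f in frames]
    for t, mask in enumerate(blur_masks):
        for y, row in enumerate(mask):
            for x, blurred in enumerate(row):
                if not blurred:
                    continue
                # prefer the previous frame, fall back to the next one
                for s in (t - 1, t + 1):
                    if 0 <= s < len(frames) and not blur_masks[s][y][x]:
                        result[t][y][x] = frames[s][y][x]
                        break
    return result
```

This temporal substitution works best when the scene moves little between consecutive frames, since the same coordinate is assumed to show the same surface.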
- the blur determination unit 130 may generate a blur model for each tap structure of the ToF type camera.
- the ToF type camera may be configured in a 4-phase/1-tap, 4-phase/2-tap, or 4-phase/4-tap structure, among others.
- Equation 5 illustrates a blur model of a ToF camera having a 4-phase/1-tap structure.
- Equation 6 illustrates a blur model of a ToF camera having a 4-phase/2-tap structure.
- Equation 7 illustrates a blur model of a ToF camera having a 4-phase/4-tap structure.
- FIG. 12 illustrates an example of removing blur in an image, according to an exemplary embodiment.
- the image processor 140 may remove the generated blur by replacing a pixel value in which the blur occurs in the image with a normal pixel value in the vicinity where the blur does not occur.
- a blurred pixel value is a value that appears inside the rhombus-shaped graph, whereas a surrounding normal pixel value is a value that appears on the rhombus-shaped graph itself.
- the image processor 140 may therefore remove the generated blur by replacing the values appearing inside the rhombus graph with values appearing on the rhombus graph.
- the image processor 140 may remove the generated blur by replacing the pixel value where the blur occurs with depth information of a pixel located at the shortest distance among the surrounding normal pixels.
- the image processor 140 may replace the depth information of the blurred pixel with a normal pixel value without the blur based on a neighboring frame having a different time from the blurred frame.
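A minimal sketch of the shortest-distance replacement mentioned above, assuming a brute-force nearest-neighbor search over a small depth map (the data layout and distance metric are hypothetical):

```python
def deblur_spatial(depth, blur_mask):
    """Replace each blurred pixel's depth with the depth of the nearest
    normal (non-blurred) pixel, measured by Euclidean distance in image space."""
    h, w = len(depth), len(depth[0])
    # collect coordinates of all normal pixels once
    normals = [(y, x) for y in range(h) for x in range(w) if not blur_mask[y][x]]
    out = [row[:] for row in depth]
    for y in range(h):
        for x in range(w):
            if blur_mask[y][x] and normals:
                ny, nx = min(normals, key=lambda p: (p[0] - y) ** 2 + (p[1] - x) ** 2)
                out[y][x] = depth[ny][nx]
    return out
```

The brute-force search is O(pixels × normals); for full-resolution maps a spatial index or scanline fill would be used instead, but the principle is the same.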
- FIG. 13 is a flowchart illustrating a procedure of a blur processing method according to an exemplary embodiment.
- the blur processing method may be performed by the blur processing apparatus 100 shown in FIG. 1.
- in step 1310, the blur processing apparatus 100 emits an irradiation light signal toward the object 104 through the light irradiation unit 102, and acquires a reflected light signal, i.e., the emitted irradiation light signal reflected back from the object 104.
- the blur processing apparatus 100 may calculate depth information using the phase difference of the acquired reflected light signal.
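The phase-to-depth relation for a continuous-wave ToF camera can be sketched as follows; the standard formula depth = c·φ/(4πf) and the modulation frequency value are assumptions not stated in this excerpt:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def depth_from_phase(phase_rad, mod_freq_hz):
    """Convert the phase difference between the irradiated and reflected
    light signals into a depth.  The unambiguous range is c / (2 * f):
    phases beyond 2*pi wrap around and alias to nearer depths."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)
```

For example, at a 20 MHz modulation frequency a phase difference of π corresponds to roughly 3.75 m, half of the ~7.5 m unambiguous range.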
- the blur processing apparatus 100 may determine whether blur occurs in an image acquired by the ToF camera using the depth information.
- the blur processing apparatus 100 may calculate depth information by computing the phase differences between the reflected light signal and the control signals, which have mutually different phases. For example, the blur processing apparatus 100 may obtain the amount of charge received under each control signal, each having a different phase difference with respect to the reflected light signal. The blur processing apparatus 100 may then determine whether blur occurs in the image according to whether the relationship between the acquired charge amounts and the charge amounts fixed by the control signals' phase differences falls outside the normal range.
- the blur processing apparatus 100 removes the generated blur using a normal pixel value around the blur area where the blur has occurred in the image.
- the blur processing apparatus 100 may remove the generated blur by replacing a pixel value in which the blur occurs in the image with a normal pixel value around the area in which the blur does not occur.
- the blur processing apparatus 100 may remove the generated blur by replacing the depth information of the blur area in which the blur occurs with a normal pixel value, based on a neighboring frame captured at a time different from the frame in which the blur occurs in the image.
- FIG. 14 is a block diagram showing a configuration of a blur processing apparatus according to another embodiment.
- the blur processing apparatus 1400 may include a model identifier 1410, a searcher 1420, and an image processor 1430.
- the blur processing apparatus 1400 may be used to remove blur from the image when the charge amounts Q 1 , Q 2 , Q 3 , and Q 4 of the reflected light signal under the control signals are not available.
- the model identification unit 1410 may identify a blur model corresponding to the structure of the ToF type camera.
- the ToF type camera may be configured in an L-phase/M-tap scheme, with control signals of L mutually different phases (L is a natural number) generated in various ways, and M charge storage spaces (M is a natural number). The blur model therefore differs according to the structure of the ToF camera, such as 4-phase/1-tap, 4-phase/2-tap, or 4-phase/4-tap.
- the searcher 1420 searches for a pattern associated with the blur model in the image acquired by the ToF type camera.
- the searcher 1420 may search for a pattern associated with the blur model using various pattern search methods.
- the searcher 1420 may use a pyramid-type stepwise search method in orientation and space to improve the search speed.
- the searcher 1420 may search for the pattern in r-theta space by using a Hough transform.
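A Hough transform maps each image point to a sinusoid in (r, theta) space and accumulates votes; accumulator peaks then correspond to line-shaped patterns. The following minimal, unoptimized sketch (an illustrative implementation, not the patent's) shows the voting step:

```python
import math
from collections import Counter

def hough_lines(points, n_theta=180, r_step=1.0):
    """Vote every point into a discretized (r, theta) accumulator.

    Each point (x, y) satisfies r = x*cos(theta) + y*sin(theta) for every
    theta; collinear points share one (r, theta) pair, so their votes pile
    up in a single accumulator bin.
    """
    acc = Counter()
    for x, y in points:
        for i in range(n_theta):
            theta = i * math.pi / n_theta
            r = x * math.cos(theta) + y * math.sin(theta)
            acc[(round(r / r_step), i)] += 1
    return acc

# ten points on the vertical line x = 5: the bin (r=5, theta=0)
# collects one vote from every point
acc = hough_lines([(5, y) for y in range(10)])
```

In practice a library routine (e.g. an optimized accumulator with non-maximum suppression) would be used; this sketch only illustrates why the searched pattern appears as a peak in r-theta space.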
- the image processor 1430 removes the blur from the image by filtering the blur area within the searched pattern.
- the image processor 1430 may perform filtering in r-theta space to remove noise in the blur area. In this case, to make the Euclidean distance meaningful in r-theta space, different weighting parameters may be applied to r and theta.
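The weighting idea can be sketched as follows; the weight values are hypothetical, and the wrap-around handling of theta is an added assumption:

```python
import math

def rtheta_distance(p, q, w_r=1.0, w_theta=5.0):
    """Weighted Euclidean distance between two (r, theta) points.

    r and theta live on different scales, so a plain Euclidean distance
    mixes incomparable units; the weights (hypothetical values here)
    rescale the two axes before the distance is taken.  Theta is compared
    on the circle, so 0.1 and 2*pi - 0.1 are treated as close.
    """
    dr = p[0] - q[0]
    dt = abs(p[1] - q[1]) % (2 * math.pi)
    dt = min(dt, 2 * math.pi - dt)
    return math.hypot(w_r * dr, w_theta * dt)
```

A neighborhood filter in r-theta space would then average or reject accumulator entries based on this distance rather than on the raw coordinate difference.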
- FIG. 15 is a diagram illustrating an example of filtering a blur area using a blur model.
- the image processor 1430 may obtain a deblurred image (FIG. 15(c)) by filtering the blur region in which the blur occurred (FIG. 15(a)) using the blur model (FIG. 15(b)).
- FIG. 16 is a flowchart illustrating a procedure of a blur processing method according to another embodiment.
- the blur processing method may be performed by the blur processing apparatus 1400 shown in FIG. 14.
- the blur processing apparatus 1400 identifies a blur model corresponding to the structure of a ToF camera.
- the ToF camera has control signals having L (L is a natural number) phases different from each other, and may be configured in an L-phase / M-tap method having M (M is a natural number) charge storage spaces.
- the blur model may have different models according to the structure of a ToF camera such as 4-phase / 1-tap, 4-phase / 2-tap, 4-phase / 4-tap, and the like.
- the blur processing apparatus 1400 searches for a pattern associated with the blur model in the image acquired by the ToF camera.
- the blur processing apparatus 1400 may search for a pattern associated with the blur model using various pattern search methods.
- the blur processing apparatus 1400 may improve the search speed by using a pyramid-type stepwise search method in orientation and space.
- the blur processing apparatus 1400 removes the blur from the image by filtering a blur area in which the blur occurs in the searched pattern.
- the blur processing apparatus 1400 may perform filtering on an r-theta space to remove noise in the blur area.
- Methods according to the embodiments may be implemented in the form of program instructions that can be executed by various computer means and recorded on a computer-readable medium.
- the computer readable medium may include program instructions, data files, data structures, etc. alone or in combination.
- the program instructions recorded on the media may be those specially designed and constructed for the described embodiments, or they may be of the kind well-known and available to those having skill in the computer software arts.
Abstract
Description
Claims (20)
- A blur processing apparatus comprising: a controller to generate control signals; a sensor unit to integrate, according to the control signals, electrons generated by a reflected light signal returning after an irradiation light signal is reflected from an object; and a blur determination unit to determine whether blur occurs by using the relationship between the amounts of electrons integrated for the respective control signals.
- The blur processing apparatus of claim 1, wherein the blur determination unit determines whether blur occurs by comparing the relationship between the amounts of the integrated electrons with reference charge amount relationship information representing a blur-free case.
- The blur processing apparatus of claim 1, wherein, when the irradiation light signal is emitted through a Time-of-Flight (ToF) type camera, the blur determination unit determines whether blur has occurred in an image acquired through the ToF type camera by using the phase difference of the reflected light signal.
- The blur processing apparatus of claim 3, wherein the blur determination unit calculates depth information by computing phase differences between the reflected light signal and a plurality of control signals having mutually different phase differences.
- The blur processing apparatus of claim 3, wherein the sensor unit obtains the amount of charge received by each of a plurality of control signals having phase differences different from that of the reflected light signal, and the blur determination unit determines whether blur has occurred in the image according to whether the relationship between the obtained charge amounts and the charge amounts defined by the mutually different phase differences of the control signals deviates from a normal range.
- The blur processing apparatus of claim 3, wherein the sensor unit acquires n reflected light signals (n is a natural number), and the blur determination unit calculates n pieces of depth information by computing phase differences between the n reflected light signals and a plurality of control signals having mutually different phase differences, and calculates an average of the calculated n pieces of depth information.
- The blur processing apparatus of claim 6, wherein the blur determination unit determines that blur has occurred in the image when at least one of the calculated n pieces of depth information is not flat.
- The blur processing apparatus of claim 1, further comprising an image processor to remove generated blur when blur is determined to have occurred, by replacing a pixel value where the blur occurs, within an image acquired through a ToF type camera, with a normal pixel value of a surrounding area where the blur does not occur.
- The blur processing apparatus of claim 8, wherein the image processor replaces depth information of the blur area where the blur occurs with a normal pixel value of a surrounding area where the blur does not occur, based on a neighboring frame at a time different from the frame in which the blur occurs in the image.
- A blur processing apparatus comprising: a model identification unit to identify a blur model corresponding to a structure of a ToF type camera; a searcher to search an image acquired through the ToF type camera for a pattern associated with the blur model; and an image processor to remove blur from the image by filtering a blur area within the searched pattern.
- The blur processing apparatus of claim 10, wherein the image processor filters the blur area in an r-theta space.
- A blur processing method comprising: generating a plurality of control signals having mutually different phases; integrating, according to the control signals, electrons generated by a reflected light signal returning after an irradiation light signal is reflected from an object; and determining whether blur occurs by using the relationship between the amounts of electrons integrated for the respective control signals.
- The blur processing method of claim 12, wherein the determining of whether blur occurs comprises comparing the relationship between the amounts of the integrated electrons with reference charge amount relationship information representing a blur-free case.
- The blur processing method of claim 12, wherein, when the irradiation light signal is emitted through a ToF type camera, the determining of whether blur occurs comprises determining whether blur has occurred in an image acquired through the ToF type camera by using the phase difference of the reflected light signal.
- The blur processing method of claim 14, wherein the determining of whether the blur has occurred comprises calculating depth information by computing phase differences between the reflected light signal and a plurality of control signals having mutually different phase differences.
- The blur processing method of claim 14, wherein the determining of whether the blur has occurred comprises: computing phase differences between n acquired reflected light signals (n is a natural number) and a plurality of control signals having mutually different phase differences; calculating n pieces of depth information using the computed result; and calculating an average of the calculated n pieces of depth information.
- The blur processing method of claim 12, further comprising removing generated blur when blur is determined to have occurred, by replacing a pixel value where the blur occurs, within an image acquired through a ToF type camera, with a normal pixel value of a surrounding area where the blur does not occur.
- The blur processing method of claim 17, wherein the removing of the generated blur comprises replacing depth information of the blur area where the blur occurs with a normal pixel value of a surrounding area where the blur does not occur, based on a neighboring frame at a time different from the frame in which the blur occurs in the image.
- A blur processing method comprising: identifying a blur model corresponding to a structure of a ToF type camera; searching an image acquired through the ToF type camera for a pattern associated with the blur model; filtering a blur area where blur occurs within the searched pattern; and removing the blur from the image in which the blur area has been filtered.
- The blur processing method of claim 19, wherein the filtering of the blur area where blur occurs within the searched pattern comprises filtering the blur area in an r-theta space.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/823,557 US9456152B2 (en) | 2011-07-12 | 2012-07-11 | Device and method for blur processing |
CN201280003376.7A CN103181156B (zh) | 2011-07-12 | 2012-07-11 | 模糊处理装置及方法 |
JP2014520126A JP6193227B2 (ja) | 2011-07-12 | 2012-07-11 | ブラー処理装置及び方法 |
EP12811416.2A EP2733928B1 (en) | 2011-07-12 | 2012-07-11 | Device and method for blur processing |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201161506758P | 2011-07-12 | 2011-07-12 | |
US61/506,758 | 2011-07-12 | ||
KR10-2012-0075386 | 2012-07-11 | ||
KR1020120075386A KR101929557B1 (ko) | 2011-07-12 | 2012-07-11 | 블러 처리 장치 및 방법 |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2013009099A2 true WO2013009099A2 (ko) | 2013-01-17 |
WO2013009099A3 WO2013009099A3 (ko) | 2013-03-07 |
Family
ID=47506714
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2012/005512 WO2013009099A2 (ko) | 2011-07-12 | 2012-07-11 | 블러 처리 장치 및 방법 |
Country Status (2)
Country | Link |
---|---|
US (1) | US9456152B2 (ko) |
WO (1) | WO2013009099A2 (ko) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014163717A (ja) * | 2013-02-22 | 2014-09-08 | Stanley Electric Co Ltd | 距離画像生成装置および距離画像生成方法 |
US20150334372A1 (en) * | 2014-05-19 | 2015-11-19 | Samsung Electronics Co., Ltd. | Method and apparatus for generating depth image |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9922427B2 (en) * | 2014-06-06 | 2018-03-20 | Infineon Technologies Ag | Time-of-flight camera with location sensor system |
KR102272254B1 (ko) * | 2015-02-13 | 2021-07-06 | 삼성전자주식회사 | 위상 검출 픽셀을 이용하여 깊이 맵을 생성하기 위한 영상 생성 장치 |
JP2018036102A (ja) * | 2016-08-30 | 2018-03-08 | ソニーセミコンダクタソリューションズ株式会社 | 測距装置、および、測距装置の制御方法 |
KR102618542B1 (ko) | 2016-09-07 | 2023-12-27 | 삼성전자주식회사 | ToF (time of flight) 촬영 장치 및 ToF 촬영 장치에서 깊이 이미지의 블러 감소를 위하여 이미지를 처리하는 방법 |
CN111580117A (zh) * | 2019-02-19 | 2020-08-25 | 光宝电子(广州)有限公司 | 飞时测距感测系统的控制方法 |
Family Cites Families (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS5979208A (ja) * | 1982-10-29 | 1984-05-08 | Canon Inc | ぶれ検知装置 |
US7042507B2 (en) * | 2000-07-05 | 2006-05-09 | Minolta Co., Ltd. | Digital camera, pixel data read-out control apparatus and method, blur-detection apparatus and method |
US7283213B2 (en) | 2005-02-08 | 2007-10-16 | Canesta, Inc. | Method and system to correct motion blur and reduce signal transients in time-of-flight sensor systems |
US20060241371A1 (en) | 2005-02-08 | 2006-10-26 | Canesta, Inc. | Method and system to correct motion blur in time-of-flight sensor systems |
EP1748304A1 (en) * | 2005-07-27 | 2007-01-31 | IEE International Electronics & Engineering S.A.R.L. | Method for operating a time-of-flight imager pixel |
US8325220B2 (en) | 2005-12-02 | 2012-12-04 | Koninklijke Philips Electronics N.V. | Stereoscopic image display method and apparatus, method for generating 3D image data from a 2D image data input and an apparatus for generating 3D image data from a 2D image data input |
US7450220B2 (en) * | 2006-02-08 | 2008-11-11 | Canesta, Inc | Method and system to correct motion blur and reduce signal transients in time-of-flight sensor systems |
JP4321540B2 (ja) | 2006-03-30 | 2009-08-26 | 株式会社豊田中央研究所 | 物体検出装置 |
JP5098331B2 (ja) | 2006-12-28 | 2012-12-12 | 株式会社豊田中央研究所 | 計測装置 |
KR100942271B1 (ko) | 2007-07-30 | 2010-02-16 | 광운대학교 산학협력단 | 깊이 정보를 이용한 집적 영상 복원 방법 및 장치 |
JP5280030B2 (ja) | 2007-09-26 | 2013-09-04 | 富士フイルム株式会社 | 測距方法および装置 |
JP4895304B2 (ja) | 2007-09-26 | 2012-03-14 | 富士フイルム株式会社 | 測距方法および装置 |
JP5509487B2 (ja) | 2008-06-06 | 2014-06-04 | リアルディー インコーポレイテッド | 立体視画像のブラー強化 |
KR100987921B1 (ko) | 2008-12-31 | 2010-10-18 | 갤럭시아커뮤니케이션즈 주식회사 | 선택적 움직임 검색영역을 이용한 움직임 보상기법이 적용되는 동영상 압축부호화장치및 복호화 장치와 움직임 보상을 위한 선택적 움직임 검색영역 결정방법. |
US8203602B2 (en) | 2009-02-06 | 2012-06-19 | Robert Bosch Gmbh | Depth-aware blur kernel estimation method for iris deblurring |
US7912252B2 (en) * | 2009-02-06 | 2011-03-22 | Robert Bosch Gmbh | Time-of-flight sensor-assisted iris capture system and method |
US8229244B2 (en) | 2009-03-30 | 2012-07-24 | Mitsubishi Electric Research Laboratories, Inc. | Multi-image deblurring |
KR101590767B1 (ko) | 2009-06-09 | 2016-02-03 | 삼성전자주식회사 | 영상 처리 장치 및 방법 |
JP5281495B2 (ja) | 2009-06-18 | 2013-09-04 | キヤノン株式会社 | 画像処理装置およびその方法 |
CN101582165B (zh) | 2009-06-29 | 2011-11-16 | 浙江大学 | 基于灰度图像与空间深度数据的摄像机阵列标定算法 |
US20110007072A1 (en) | 2009-07-09 | 2011-01-13 | University Of Central Florida Research Foundation, Inc. | Systems and methods for three-dimensionally modeling moving objects |
JP5760167B2 (ja) | 2009-07-17 | 2015-08-05 | パナソニックIpマネジメント株式会社 | 空間情報検出装置 |
JP5760168B2 (ja) | 2009-07-17 | 2015-08-05 | パナソニックIpマネジメント株式会社 | 空間情報検出装置 |
KR101565969B1 (ko) | 2009-09-01 | 2015-11-05 | 삼성전자주식회사 | 깊이 정보를 추정할 수 있는 방법과 장치, 및 상기 장치를 포함하는 신호 처리 장치 |
EP2395369A1 (en) * | 2010-06-09 | 2011-12-14 | Thomson Licensing | Time-of-flight imager. |
JP5635937B2 (ja) * | 2011-03-31 | 2014-12-03 | 本田技研工業株式会社 | 固体撮像装置 |
-
2012
- 2012-07-11 WO PCT/KR2012/005512 patent/WO2013009099A2/ko active Application Filing
- 2012-07-11 US US13/823,557 patent/US9456152B2/en active Active
Non-Patent Citations (2)
Title |
---|
None |
See also references of EP2733928A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014163717A (ja) * | 2013-02-22 | 2014-09-08 | Stanley Electric Co Ltd | 距離画像生成装置および距離画像生成方法 |
US20150334372A1 (en) * | 2014-05-19 | 2015-11-19 | Samsung Electronics Co., Ltd. | Method and apparatus for generating depth image |
US9746547B2 (en) * | 2014-05-19 | 2017-08-29 | Samsung Electronics Co., Ltd. | Method and apparatus for generating depth image |
Also Published As
Publication number | Publication date |
---|---|
WO2013009099A3 (ko) | 2013-03-07 |
US20130242111A1 (en) | 2013-09-19 |
US9456152B2 (en) | 2016-09-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2013009099A2 (ko) | 블러 처리 장치 및 방법 | |
WO2020085881A1 (en) | Method and apparatus for image segmentation using an event sensor | |
WO2019172725A1 (en) | Method and apparatus for performing depth estimation of object | |
US9491440B2 (en) | Depth-sensing camera system | |
WO2016060439A1 (ko) | 영상 처리 방법 및 장치 | |
WO2015034269A1 (ko) | 영상 처리 방법 및 장치 | |
WO2017034220A1 (en) | Method of automatically focusing on region of interest by an electronic device | |
US8433185B2 (en) | Multiple anti-shake system and method thereof | |
WO2017007096A1 (en) | Image capturing apparatus and method of operating the same | |
WO2016060366A1 (en) | Imaging apparatus and imaging method | |
KR101929557B1 (ko) | 블러 처리 장치 및 방법 | |
CN103460105A (zh) | 成像装置及其自动对焦控制方法 | |
JP2006226965A (ja) | 画像処理装置、コンピュータプログラム、及び画像処理方法 | |
WO2021118111A1 (ko) | 이미지 처리 장치 및 이미지 처리 방법 | |
WO2020076128A1 (en) | Method and electronic device for switching between first lens and second lens | |
JP2001249265A (ja) | 測距装置 | |
WO2017209509A1 (ko) | 영상 처리 장치, 그의 영상 처리 방법 및 비일시적 컴퓨터 판독가능 기록매체 | |
JP2011169842A (ja) | フリッカー測定方法およびその装置 | |
EP3066508A1 (en) | Method and system for creating a camera refocus effect | |
JP2001175878A (ja) | 画像特徴抽出装置、画像特徴抽出方法、監視検査システム、半導体露光システム、およびインターフェースシステム | |
WO2017086522A1 (ko) | 배경스크린이 필요 없는 크로마키 영상 합성 방법 | |
JP2000341720A (ja) | 3次元画像入力装置および記録媒体 | |
JP4085720B2 (ja) | デジタルカメラ | |
WO2013081383A1 (ko) | 깊이 영상을 고해상도로 변환하는 방법 및 장치 | |
CN108476286A (zh) | 一种图像输出方法以及电子设备 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 12811416 Country of ref document: EP Kind code of ref document: A2 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13823557 Country of ref document: US |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012811416 Country of ref document: EP |
|
ENP | Entry into the national phase |
Ref document number: 2014520126 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |