WO2021239029A1 - Phantom reflection compensation method and device - Google Patents

Phantom reflection compensation method and device

Info

Publication number
WO2021239029A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
phantom
reflection
phantom reflection
model
Prior art date
Application number
PCT/CN2021/096220
Other languages
English (en)
French (fr)
Inventor
萨里尼约瑟夫
Original Assignee
索尼半导体解决方案公司
萨里尼约瑟夫
Priority date
Filing date
Publication date
Application filed by 索尼半导体解决方案公司 and 萨里尼约瑟夫
Priority to EP21812623.3A priority Critical patent/EP4138030A4/en
Priority to US17/926,174 priority patent/US20230196514A1/en
Priority to JP2022572752A priority patent/JP2023527833A/ja
Priority to CN202180036034.4A priority patent/CN115917587A/zh
Publication of WO2021239029A1 publication Critical patent/WO2021239029A1/zh

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00 - Diagnosis, testing or measuring for television systems or their details
    • H04N17/002 - Diagnosis, testing or measuring for television systems or their details for television cameras
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/20 - Linear translation of whole images or parts thereof, e.g. panning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/60 - Rotation of whole images or parts thereof
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/40 - Image enhancement or restoration using histogram techniques
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/77 - Retouching; Inpainting; Scratch removal
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10028 - Range image; Depth image; 3D point clouds
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20224 - Image subtraction
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/61 - Noise processing, e.g. detecting, correcting, reducing or removing noise, the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"

Definitions

  • the present disclosure relates to image processing, and particularly to image compensation processing.
  • Object detection, recognition, comparison, and tracking in static images or in a series of moving images are widely applied in image processing, computer vision, and pattern recognition, and play an important role in these fields.
  • The object can be a human body part, such as a face, hand, or body, another living thing or plant, or any other object to be detected.
  • Object recognition is one of the most important computer vision tasks. Its goal is to identify or verify specific objects based on the input photos/videos, and then accurately learn relevant information about the objects. In particular, in some application scenarios, when performing object recognition based on an object image taken by a camera device, it is necessary to be able to accurately recognize the detailed information of the object from the image, and then accurately recognize the object.
  • However, images obtained by current camera devices often contain various kinds of noise, and the presence of this noise degrades image quality, which may result in inaccurate or even wrong detail information and in turn affect the imaging and recognition of the object.
  • An object of the present disclosure is to improve image processing to further suppress noise in the image, especially noise related to phantom reflection, and then improve image quality.
  • the captured images may have ghosts, which in turn leads to poor image quality.
  • the present disclosure can use the ghost reflection compensation model to compensate the image, effectively remove the ghost in the image, and obtain a high-quality image.
  • According to one aspect of the present disclosure, there is provided an electronic device for compensating for phantom reflection in an image captured by a camera device, including a processing circuit configured to: weight an image to be compensated that contains phantom reflection by using a phantom reflection compensation model, wherein the phantom reflection compensation model is related to the intensity distribution of the phantom reflection in the image caused by light reflection inside the camera device during shooting; and combine the image to be compensated and the weighted image to eliminate the phantom reflection in the image.
  • According to another aspect of the present disclosure, there is provided a method for compensating for phantom reflection in an image taken by a camera device, including the following steps: a calculation step for weighting an image to be compensated that contains phantom reflection by using a phantom reflection compensation model, wherein the phantom reflection compensation model is related to the intensity distribution of the phantom reflection in the image caused by light reflection inside the camera device during shooting; and a compensation step for combining the image to be compensated and the weighted image to eliminate the phantom reflection in the image.
  • According to another aspect, there is provided a device including at least one processor and at least one storage device, the at least one storage device having instructions stored thereon which, when executed by the at least one processor, cause the at least one processor to execute the method described herein.
  • a storage medium storing instructions that, when executed by a processor, can cause the method as described herein to be performed.
  • FIG. 1 shows an overview of ToF technology.
  • FIG. 2A shows light reflection caused by a close object in close object shooting.
  • Figure 2B shows the light reflection caused by the photographic filter.
  • Fig. 3 shows a schematic diagram of the ghost phenomenon in an image.
  • FIGS. 4A to 4C show examples of ghost reflections in the confidence image and the depth image.
  • FIG. 5A shows an image processing flow including scattering compensation in the solution of the present disclosure.
  • FIG. 5B shows an exemplary scattering compensation operation in the solution of the present disclosure.
  • FIG. 5C shows the result of the scattering compensation in the solution of the present disclosure.
  • FIG. 6 shows a flowchart of a method for phantom reflection compensation according to an embodiment of the present disclosure.
  • FIG. 7 shows a block diagram of an electronic device capable of phantom reflection compensation according to an embodiment of the present disclosure.
  • FIG. 8 shows an illustration of a phantom reflection compensation model according to an embodiment of the present disclosure.
  • FIG. 9 shows the extraction of a phantom reflection compensation model according to an embodiment of the present disclosure.
  • FIGS. 10A to 10C show schematic diagrams of an exemplary image rotation operation according to an embodiment of the present disclosure.
  • FIG. 11 shows an image processing flow including phantom reflection compensation according to an embodiment of the present disclosure.
  • FIG. 12 shows an image processing flow including phantom reflection compensation according to an embodiment of the present disclosure.
  • FIG. 13 shows the execution result of phantom reflection compensation according to an embodiment of the present disclosure.
  • FIGS. 14A and 14B show phantom reflection compensation for a dToF sensor according to an embodiment of the present disclosure.
  • FIG. 15 shows phantom reflection compensation for point ToF according to an embodiment of the present disclosure.
  • FIG. 16 shows a photographing device according to an embodiment of the present disclosure.
  • FIG. 17 shows a block diagram showing an exemplary hardware configuration of a computer system capable of implementing an embodiment of the present invention.
  • an image may refer to any one of a variety of images, such as a color image, a grayscale image, and the like. It should be noted that in the context of this specification, the type of image is not specifically limited, as long as such an image can be subjected to processing for information extraction or detection.
  • the image may be an original image or a processed version of the image, such as a version of an image that has undergone preliminary filtering or preprocessing before performing the operations of this application on the image.
  • When a scene is photographed by a camera device, noise is usually present in the obtained image; for example, it may include scattering and ghost reflection (phantom reflection) phenomena. Although in some cases these noise phenomena can add to the artistic effect of captured images, such as some landscape photos taken by RGB sensors, this noise is particularly harmful for all sensors that use light to measure distance (such as sensors based on ToF technology, structured light sensors for 3D measurement, etc.), much more so than for RGB sensors. The following briefly describes the time-of-flight (ToF) technology and the noise problems that occur when a ToF sensor takes pictures, in conjunction with the accompanying drawings. It should be noted that, because the principle is the same, these noise problems also exist for image sensors based on other technologies, such as structured light sensors and RGB sensors; for the sake of brevity, the present disclosure does not describe them separately.
  • the light emission can be accomplished, for example, by pulses (direct time of flight) or continuous waves (indirect time of flight).
  • noise phenomena may occur when taking pictures, such as scattering and phantom reflections. Such noise phenomenon may be caused by light reflection in the camera.
  • As shown in FIG. 2A, a scene containing three objects is photographed by a camera module, which includes a lens and a sensor, that is, an imager.
  • The distances of objects 1, 2, and 3 from the imager are r(1), r(2), and r(3), respectively. The light emitted toward these three objects is reflected by each object and returns to the corresponding position on the imager in the camera module, namely imaging positions S(1), S(2), and S(3).
  • the object 1 is very close to the camera module, so that the reflected light intensity is high, and it will bounce inside the module (for example, between the lens and the imaging device). In the captured image, the signal from object 1 will be scattered around its location and will be mixed with the signals from objects 2 and 3. The ToF sensor will combine these signals and provide the wrong depth for objects 2 and 3 (the measured depth is between the distance r(1) and the distance r(2) or r(3)).
  • a photographic filter (photographic filter) is often set before the lens in a camera, which may cause phantom reflections.
  • As shown in Figure 2B, under normal circumstances light enters the imaging point on the sensor through the filter and lens, as shown by the solid line and the upper arrow, but part of the light is reflected from the imaging point back toward the lens and transmitted through the lens, as shown by the reverse arrow.
  • This light is then reflected by the filter back toward the lens and is incident on the sensor through the lens, as shown by the dotted arrow, as if light coming from a different direction were being imaged on the sensor; in addition to the imaging point, a ghost image is therefore generated.
  • FIG. 4A shows an RGB image of a scene taken by a mobile phone; the shooting mode is bokeh mode and the integration time is 300 μs.
  • Figure 4B shows a confidence image of the scene.
  • The confidence image indicates the confidence of the depth information in the scene image.
  • Each pixel in the confidence image indicates the confidence of the depth information provided by the corresponding pixel in the scene image. It can be seen that the right side shows a close object, which is displayed as bright white due to the short distance. Because of the influence of this close object as described above, a scattering effect (the scattered white points near the bright white area) appears in the middle part of the image.
  • FIG. 4C shows a depth image of the scene, which indicates the depth information of objects in the scene, where each pixel in the depth image indicates the distance from the camera to the object in the scene. It can be seen that on the left side of the depth map there is a gray shadow area, resembling an object, that is caused by phantom reflection and is easily mistaken for providing real depth information. It can also be seen that, in the captured image, the scattering part is located between the object image and the phantom reflection part. Due to the presence of the phantom reflection, wrong depth information for the close object is provided on the left side of the image, and the measured depth there is usually very shallow, which prevents the information of the close object, especially its depth information, from being correctly identified.
  • FIG. 5A shows a scatter compensation flow in the image processing proposed in the present disclosure, in which scatter compensation is performed on ToF raw data, and then subsequent data processing is performed on the scatter compensated data to obtain a confidence image and a depth image.
  • the subsequent data processing may include processing known in the art for generating a confidence image and a depth image, which will not be described in detail here.
  • the scattering effect may be caused by the reflection of light from close objects in the camera device between the sensor and the lens. This will produce some blur around the object, so the edges of the image are not clear.
  • Modeling can be performed by an appropriate function capable of describing the characteristics of such blur generated by dots or pixels, and the function may be, for example, a PSF function (point spread function).
  • algorithms can be applied to eliminate the specific blur generated by all points/pixels in the image.
  • the algorithm may be, for example, a deconvolution algorithm. It should be pointed out that the modeling and compensation of scattering can be performed by using other suitable functions and algorithms known in the art, which will not be described in detail here.
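  • As a rough illustration of this kind of PSF-based scatter compensation (a sketch under assumed parameters, not the disclosure's exact algorithm), the snippet below applies a regularized inverse (Wiener-style) deconvolution in the frequency domain; the Gaussian PSF shape and the regularization constant eps are placeholder assumptions.

```python
import numpy as np

def gaussian_psf(shape, sigma):
    """Placeholder PSF: an isotropic Gaussian blur kernel the size of the image."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    psf = np.exp(-(((x - w / 2) ** 2 + (y - h / 2) ** 2) / (2 * sigma ** 2)))
    return psf / psf.sum()

def scatter_compensate(raw, psf, eps=1e-3):
    """Wiener-style deconvolution: regularized division in the frequency domain."""
    raw_f = np.fft.fft2(raw)
    psf_f = np.fft.fft2(np.fft.ifftshift(psf))
    deconv_f = raw_f * np.conj(psf_f) / (np.abs(psf_f) ** 2 + eps)
    return np.real(np.fft.ifft2(deconv_f))

# Usage: a raw ToF intensity frame (H x W) -> a scatter-compensated frame.
raw = np.random.rand(240, 320)           # stand-in for ToF raw data
compensated = scatter_compensate(raw, gaussian_psf(raw.shape, sigma=8.0))
```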
  • FIG. 5B shows an exemplary scattering compensation operation according to an embodiment of the present disclosure.
  • the deconvolution algorithm corresponding to the modeling function (for example, PSF function) (for example, inverse transform)
  • this scattering will be eliminated, so that the edges of the white patches will be clearer and the blur in the image will be eliminated .
  • the scattering effect is compensated, as shown in the image on the right.
  • FIG. 5C shows the result of the scattering compensation, and the image contains a confidence map and a depth map corresponding to the RGB image shown in FIG. 4A.
  • the depth map it can be seen that even if the scattering noise in the image is removed by the scattering compensation, there is still a ghost reflection (shadow part) on the left side of the image, and the ghost will still cause an erroneous depth measurement.
  • one goal of the present disclosure is to be able to effectively eliminate ghost reflections.
  • the present disclosure proposes to use the extracted ghost reflection compensation model to weight the data/image to be processed, and to use the weighted data/image to compensate the data/image to be processed, thereby effectively eliminating the ghost reflection.
  • This phantom reflection phenomenon is caused in particular by reflection from the filter in the camera system. Therefore, for camera systems that additionally use a filter, the phantom reflection compensation technique according to the present disclosure is particularly advantageous, regardless of the type of sensor, whether it is a ToF sensor, a structured light sensor, an RGB sensor, or another type of sensor.
  • FIG. 6 shows a flowchart of a method for compensating for ghost reflection in an image taken by a camera according to an embodiment of the present disclosure.
  • The method 600 may include a calculation step S601 for weighting the image to be compensated containing ghost reflections by using a ghost reflection compensation model; and a compensation step S602 for combining the image to be compensated and the weighted image to eliminate ghost reflections in the image.
  • the camera device to which the technical solution of the present disclosure can be applied may include various types of optical camera devices, as long as the camera device may cause phantom reflection due to light reflection when taking an image.
  • the imaging device may include the aforementioned camera using a photographic filter.
  • the camera device may include the aforementioned camera for 3D imaging.
  • the camera device may include the aforementioned camera including a sensor based on ToF technology.
  • The camera device can also correspond to the aforementioned camera for close-up photography, and so on.
  • The image to be compensated can be any appropriate image, such as the original image obtained by the camera, or an image obtained by applying specific processing to the original image, such as preliminary filtering, anti-aliasing, color adjustment, contrast adjustment, normalization, and so on.
  • the pre-processing operation may also include other types of pre-processing operations known in the art, which will not be described in detail here.
  • the phantom reflection compensation model substantially reflects the characteristics of light reflection that causes phantom reflection that occurs in the imaging device, that is, it is a model obtained based on modeling the light reflection that causes phantom reflection.
  • The light reflection may be caused by the photographic filter and/or lens, that is, the light reflection characteristic corresponds to the characteristics of the photographic filter and/or lens, so the phantom reflection compensation model is essentially a model obtained by modeling the characteristics of the photographic filter and/or lens.
  • It should be pointed out that the model is not limited to this. If the light reflection causing the phantom reflection is caused by other components or other optical phenomena, the model is likewise equivalent to modeling the characteristics of such other components or optical phenomena.
  • the phantom reflection compensation model may be related to the phantom reflection intensity distribution in the image.
  • the phantom reflection compensation model may be related to the intensity distribution in the image affected by the phantom reflection, such as the intensity distribution of the entire image, or especially the intensity distribution at the source object and the phantom position.
  • the phantom reflection compensation model may indicate the phantom reflection intensity factor at a specific sub-region in the image, wherein the sub-region includes at least one pixel.
  • the specific sub-region may be each sub-region covering the entire image.
  • the specific sub-region is a sub-region in the image corresponding to the source object and the phantom reflection position.
  • The phantom reflection intensity factor may be derived based on the intensity distribution in the image, preferably based on the intensity distribution of the phantom reflection in the image, and may be regarded as a factor set so as to minimize the variation in the compensated scene image, in particular to minimize the intensity variation between the image at the location of the phantom reflection and the image in the adjacent area, so as to remove the phantom reflection.
  • the variation may refer to an intensity variation, for example, an image intensity variation of a phantom reflection area compared with an adjacent area around the phantom reflection.
  • the factor can be referred to as the phantom reflection compensation factor.
  • the phantom reflection compensation model can be expressed in various forms.
  • the phantom reflection compensation model will be exemplarily described below in conjunction with the accompanying drawings.
  • Figure 8 shows an exemplary phantom reflection compensation model according to an embodiment of the present disclosure, where (a) shows a planar representation of the model and (b) shows a three-dimensional graphical representation of the model, in which the two horizontal axes indicate the plane size of the model, which corresponds to the size of the image, and the vertical axis indicates the value of the phantom reflection intensity factor of the model.
  • the model may include various parameters related to intensity factor, central shift, and size.
  • these parameters should be set so that the model matches the characteristics of the components in the camera as much as possible, as described above.
  • the parameters related to the center offset may include parameters cx and cy, which indicate the offset of the reference position relative to the center of the image for the image transformation operation (for example, including rotation and shift) in the compensation operation, for example, in the horizontal and vertical directions, respectively.
  • the position indicated by the parameters cx and cy actually corresponds to the central axis of the image to be rotated, which means that the image will be eccentrically rotated.
  • cx and cy may directly indicate the offset of the center axis used to rotate the image relative to the image center in the subsequent processing, so that the center axis can be moved to the image center according to the offset before the rotation is performed.
  • cx and cy can correspond to the center point position shown in the model, such as the center point position shown in the planar representation in FIG. 8, so that the central axis can be moved from this position to the image center before the rotation is performed.
  • cx and cy may at least depend on the characteristics of the lens, and of course may also be related to the characteristics of other components.
  • the purpose of determining the center offset is to properly position the image so that the phantom in the rotated image can be aligned with the target in the original image.
  • the values of cx and cy can be determined, for example, through experiments or calibration measurements.
  • the size-related parameters may include the parameters width and length, which correspond to the width and length (in pixels) of the image, respectively.
  • The width and length can also indicate the width and length of the model illustration, respectively, as shown in the plane of the three-dimensional illustration in Figure 8(b).
  • the width and length can depend on the pixel arrangement of the sensor.
  • the parameter related to the intensity factor may include a parameter indicating the intensity factor distribution corresponding to the image.
  • the intensity factor distribution can be expressed by an appropriate distribution function to indicate, for example, the phantom reflection intensity factor at each pixel position of the image, as shown in Fig. 8(b).
  • the intensity factor distribution is set such that the phantom reflection intensity factor at the sub-region close to the center of the image is higher than the phantom reflection intensity factor at the sub-region close to the edge of the image.
  • The phantom reflections show different light intensities depending on where they appear, in particular gradually weakening from the center of the image toward the edges, and their adverse effect on the depth measurement likewise becomes smaller as the intensity decreases. Therefore, by setting the intensity factor as described above, the phantom reflection in the image can be appropriately weakened or even eliminated; for example, the greater the intensity of the phantom reflection, the larger the factor used to weaken and eliminate it, thereby providing accurate and effective phantom reflection compensation.
  • the intensity factor distribution may be determined according to a specific distribution function.
  • at least one specific distribution function may be included, and each function may have a corresponding weight.
  • For example, the intensity factor may be expressed as α·f(1) + β·f(2) + ..., where f(1) and f(2) respectively indicate specific distribution functions, and α and β respectively indicate the weights of those functions.
  • the parameters of the distribution function and the weights used for the distribution function can be set, for example, according to the characteristics of the light reflection that causes the phantom reflection in the camera, especially the optical characteristics of the components that cause the reflection, so as to match as much as possible ( Approximate) this feature.
  • the value of the corresponding parameter can be determined according to the empirical value obtained in the pre-test or experiment, and the value of the corresponding parameter can also be adjusted on the basis of the empirical value through a further calibration operation.
  • the intensity factor distribution follows a Gaussian distribution.
  • the first parameter is std, which is the standard deviation of the Gaussian function
  • mu is the average value of the Gaussian function.
  • these two Gaussian function parameters std, mu can be related to the intensity of the reflected light that causes phantom reflection in the camera.
  • the parameters std, mu may depend on the characteristics (for example, optical characteristics) of the components that may cause light reflection in the imaging device, such as the characteristics of the above-mentioned lens, photographic filter, and the like.
  • the expression of the intensity factor may include a specific number of Gaussian functions, and each Gaussian function may be weighted accordingly.
  • the model can be represented as follows:
  • the parameters of the Gaussian function given in the intensity factor expression can be selected depending on the optical characteristic curves of the aforementioned optical components, such as filters and lenses.
  • the intensity factor distribution is made to better correspond to (for example, reverse matching) the optical characteristic curve in order to eliminate the influence of light reflection caused by the optical characteristic.
  • the parameter can be set to an initial value based on experience, and then adjusted on the basis of the empirical value through further calibration operations.
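  • To make the structure of such a model concrete, the sketch below builds an intensity-factor map as a weighted sum of two radial Gaussian terms that are largest near the (offset) image center and decay toward the edges; all numeric values (cx, cy, std, mu, and the weights) are illustrative assumptions that would in practice come from the calibration described above.

```python
import numpy as np

def phantom_model(width, length, cx, cy, gaussians):
    """Return an intensity-factor map of shape (length, width).

    gaussians: list of (weight, std, mu) tuples; each term is a Gaussian of the
    distance from the (offset) image center, so the factor is highest near the
    center and falls off toward the edges.
    """
    y, x = np.mgrid[0:length, 0:width]
    # distance from the model center, shifted by the center-offset parameters
    r = np.sqrt((x - (width / 2 + cx)) ** 2 + (y - (length / 2 + cy)) ** 2)
    factor = np.zeros((length, width))
    for weight, std, mu in gaussians:
        factor += weight * np.exp(-((r - mu) ** 2) / (2 * std ** 2))
    return factor

# Illustrative parameter values only (in practice obtained by calibration):
model = phantom_model(width=320, length=240, cx=5.0, cy=-3.0,
                      gaussians=[(0.02, 60.0, 0.0), (0.01, 150.0, 0.0)])
```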
  • The use of a Gaussian function to express the intensity factor in the aforementioned phantom reflection compensation model is only exemplary, and other distribution functions may also be used in the present disclosure, as long as the distribution function allows the model to accurately match the intensity distribution of the phantom reflection in the image; in particular, the matching should reflect the characteristic of the light reflection that causes the phantom reflection in the imaging device, for example, the characteristic of the component that causes such light reflection.
  • the distribution function may adopt other functions having a normal distribution.
  • functions with other distributions such as Cauchy distribution, gamma distribution, etc., may be used.
  • the phantom reflection compensation model is extracted from a predetermined number of calibration images.
  • the calibration image is obtained by photographing a specific scene for calibration by a photographing device.
  • the predetermined number can be specified empirically, or the number used in the previous calibration can be used.
  • Figure 9 shows that the model is extracted from multiple images.
  • the left side indicates the images used to extract the phantom reflection compensation model. They are images obtained for a calibration scene with a white chart.
  • The white chart in each scene is at a different position; they are located at the four corners and at the center, and each image contains both a bright white patch indicating the white test card and a faint patch indicating its phantom reflection.
  • the number and arrangement of images for calibration are not limited to this, as long as the information of phantom reflection can be appropriately reflected.
  • the white test card can be arranged in more locations to obtain more calibration images, so as to reflect the phantom reflection information in the scene in more detail.
  • the phantom reflection compensation model can be determined such that the intensity variation of a certain number of calibration images after applying the model meets certain requirements.
  • The intensity variation can refer to the image intensity variation of the phantom reflection area compared with the adjacent area around the phantom reflection, that is, the difference in image intensity between the phantom reflection area and its surrounding adjacent area.
  • the phantom reflection compensation model can also be determined to eliminate or alleviate the phantom reflection in the image after applying the model.
  • The elimination or alleviation of the phantom reflection may mean that the depth/RGB information measured at the location of the phantom reflection is consistent with, or close to, the real scene. That is, the phantom reflection compensation model is extracted on the condition that the intensity variation (and, optionally or additionally, the degree of phantom reflection elimination) meets specific requirements.
  • Meeting the intensity variation requirement may mean that a statistical value of the variation obtained over all of the specific number of scene images, or over at least some of them, such as the sum or the average of the variations of these scene images, satisfies the requirement.
  • a specific requirement may mean that the intensity variation is less than a specific threshold, or a specific requirement may mean that the variation of the image is minimal. Therefore, satisfying the specific requirement means that the phantom reflection area and the adjacent area around the phantom reflection have basically the same intensity, small variation, smoothness, and no boundaries, which can basically eliminate the influence of the phantom reflection.
  • The model extraction process can be performed in various ways. According to an embodiment, it may be performed in an iterative manner. As an example, initial values of the parameters of the phantom reflection compensation model can be set, the model so set is used to calculate the aforementioned image intensity variation (and, optionally or additionally, the degree of phantom reflection elimination), and it is verified whether the image intensity variation (and, optionally or additionally, the degree of phantom reflection elimination) meets the specific requirements. If not, the parameter values continue to be adjusted and the next iteration is performed, until the intensity variation meets the specific requirements; the model corresponding to that point is determined to be the desired phantom reflection compensation model for subsequent image compensation processing.
  • model parameters that can be determined through the iterative operation may include at least related parameters of the distribution function of the model, such as the parameters of the Gaussian function itself and the weight of each Gaussian function in the presence of two or more Gaussian functions.
  • the calibration image contains the white test card and its phantom reflection, which can be used as an image for extracting the model. It should be pointed out that such a determination operation procedure can be performed separately for each image used for model extraction.
  • the image transformation is performed according to the center offset parameter of the phantom reflection compensation model.
  • the center offset parameter indicates the offset of the rotation center relative to the image center. Therefore, the image transformation essentially indicates that the image performs eccentric rotation, that is, rotates around a central axis deviating from the image center.
  • FIG. 10B shows the case of direct rotation by 180 degrees, where the position of the cross symbol corresponds to the position of the eccentric axis indicated by the center offset, and the image transformation may refer to directly rotating around the eccentric axis to obtain the final image.
  • Image transformation can also be carried out by shifting and rotating operations, that is, the operating process of shifting, rotating, and shifting again.
  • That is, the center of rotation is first shifted according to the parameter values (for example, moved to the image center according to cx and cy), the image is then rotated around the central axis at the center, and the shifted center is then shifted back according to the parameter values (that is, moved according to -cx and -cy).
  • the rotation can be rotated at any angle, as long as the source object and the phantom reflection position in the rotated image overlap with the phantom reflection position and the source object position in the previous image, respectively.
  • the rotation can be rotated by 180°.
  • The shift and rotation make the phantom in the shifted and rotated image correspond to the position of the object in the original image, and make the position of the object in the shifted and rotated image correspond to the position of the phantom in the original image.
  • In this way, the intensity factor can be used to weight the high intensity of the white patch, and the weighted intensity value can be used to suppress the lower intensity at the phantom position, which effectively suppresses the phantom intensity and thereby eliminates the phantom.
  • The weighted intensity values obtained by applying the intensity factor to other positions in the shifted and rotated image are very small, which ensures that, while the phantom in the original image is being suppressed, the intensity values at positions other than the phantom position in the original image are affected only slightly.
  • the transformed image is multiplied by the phantom reflection intensity factor of the phantom reflection compensation model (that is, the aforementioned weighting).
  • the factor at each position in the phantom reflection compensation model is multiplied by the pixel intensity of the corresponding position of the transformed image to obtain an intensity-scaled image.
  • The original image to be compensated is then subjected to an intensity subtraction with the rotated and intensity-scaled image. For example, the intensity of the corresponding area of the intensity-scaled image (for example, the pixel at the corresponding position after shifting and rotation) is subtracted from the intensity at that area in the original image to be compensated. The compensated image can thereby be obtained, and the intensity variation in it can be calculated, especially the intensity variation between the phantom reflection position and the adjacent area around the phantom reflection position (and, optionally or additionally, the degree of phantom reflection elimination).
  • In this way, the intensity variation of each image in this model extraction operation (and, optionally or additionally, the degree of phantom reflection elimination) can be obtained, and it can then be judged whether the statistical data of the intensity variation of these images (and, optionally or additionally, of the degree of phantom reflection elimination) meets certain conditions.
  • For example, it is checked whether the statistical intensity variation data of the images is less than a predetermined threshold, and/or whether the degree of phantom elimination is greater than a corresponding predetermined threshold. If so, the currently used compensation model can be considered to be the desired one, the model extraction operation is stopped, and this model is used as the phantom reflection compensation model in the actual shooting process. If not, the model parameters can be adjusted step by step and the above process repeated until the statistical intensity variation data meets the threshold requirements.
  • Alternatively, when the statistical intensity variation determined by the current extraction iteration is no longer smaller than that of the previous iteration, and/or the statistical degree of phantom elimination no longer becomes larger, it can be considered that the statistical intensity variation has been minimized and the statistical degree of phantom elimination has been maximized; the model extraction operation is then stopped, and the compensation model corresponding to the previous iteration is used as the final compensation model.
  • the initial value, step size, etc. of the model parameters in the above iterative operation can be set to any appropriate value, as long as it helps iterative convergence.
  • all model parameters can be changed at the same time each time, or only one or more parameters can be changed each time.
  • the former may correspond to the situation where all model parameters are determined simultaneously through iteration, and the latter may correspond to the situation where one or more parameters are determined first through iteration, and then other parameters are determined through iteration on this basis.
  • The phantom reflection compensation model can be used to construct a minimization equation, and when, by solving the equation, the phantom in all scenes is eliminated to a predetermined degree and the image intensity variation is minimized, the desired phantom reflection compensation model is obtained.
  • at least one of cx, cy, and the weight of each Gaussian function can be used as a variable to construct a system of equations.
  • For example, the intensity distribution in the image can be expressed as a vector or matrix, and the multiplication of the image and the model as described above with reference to FIG. 10, as well as the computation of the intensity variation, can be expressed in vector or matrix form, so that an appropriate method can be applied to solve the minimization, such as the least squares method.
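  • One way to read this extraction procedure is as a small optimization over the model parameters. The sketch below (reusing the phantom_model helper sketched earlier) scores a candidate parameter set by the residual intensity difference between the known phantom regions of the calibration images and their surroundings after compensation, and hands the cost to a generic optimizer; the cost definition, the parameter layout, and the use of scipy.optimize.minimize are assumptions of this sketch, not the disclosure's exact formulation.

```python
import numpy as np
from scipy.optimize import minimize

def compensate(image, factor_map):
    """Plain 180-degree rotation + weighting + subtraction (center offset omitted)."""
    weighted = np.rot90(image, 2) * factor_map
    return image - weighted

def calibration_cost(params, calib_images, phantom_masks, neighbor_masks):
    """Sum, over the calibration images, of the squared intensity difference between
    the phantom region and its surrounding region after applying the model."""
    w1, std1, w2, std2 = params
    h, w = calib_images[0].shape
    factor_map = phantom_model(w, h, cx=0.0, cy=0.0,
                               gaussians=[(w1, std1, 0.0), (w2, std2, 0.0)])
    cost = 0.0
    for img, pm, nm in zip(calib_images, phantom_masks, neighbor_masks):
        out = compensate(img, factor_map)
        cost += (out[pm].mean() - out[nm].mean()) ** 2
    return cost

# result = minimize(calibration_cost, x0=[0.02, 60.0, 0.01, 150.0],
#                   args=(calib_images, phantom_masks, neighbor_masks),
#                   method="Nelder-Mead")
```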
  • the phantom reflection compensation model may be determined before the camera device is used by the user, such as during the production process, during the factory test, and so on. As an example, it may be performed together with other calibration work (for example, for ToF cameras, temperature compensation, phase gradient, cycle error, etc.) during the production process. In this way, the phantom reflection compensation model can be pre-built and stored in an imaging device, such as a camera.
  • the phantom reflection compensation model may be determined while the camera is used by the user.
  • the user may be prompted to calibrate the camera. Therefore, the user can shoot the calibration image according to the operation instruction, thereby extracting the phantom reflection compensation model from the captured image.
  • the user may be prompted to update the model.
  • the phantom reflection compensation model can be updated or pushed during product maintenance service performed by the camera device.
  • For example, the above-mentioned model extraction process can be performed to update or build the model.
  • the phantom reflection compensation model can be equivalent to characterizing the characteristics of the components that cause the light reflection causing the phantom reflection in the imaging device, particularly the characteristics of the lens and the photographic filter.
  • the phantom reflection compensation model corresponds to the lens and/or photographic filter in the camera device.
  • The constructed phantom reflection compensation model can remain relatively fixed, and in particular remains unchanged during shooting.
  • The phantom reflection compensation model also needs to be updated accordingly when such components are replaced in the imaging device.
  • For example, when such a part is replaced, the phantom compensation model corresponding to the replacement part may be extracted automatically or after prompting the user, which may be performed as described above.
  • the model corresponding to the replaced part can be automatically selected. For example, a set of phantom compensation models corresponding to all filters and/or lenses applicable to the camera system is pre-stored in the camera system. In this way, after the camera system replaces the filter and/or lens, the phantom reflection compensation model corresponding to the replaced filter and/or lens can be automatically selected from the stored set for application.
  • However, the characteristics of the lens may change when the filter and/or lens is replaced, which in turn affects the center offset parameter.
  • In that case, a new model can still be extracted automatically or at the user's prompting, rather than relying on automatic selection. For example, the system can be preset, or the user can be prompted, to perform this operation; for instance, the system can be preset to automatically update the model under all circumstances, or the user can be prompted to choose whether to perform a model calibration or to automatically select a stored model.
  • the model can be applied to further optimize the captured image and improve the image quality.
  • The corresponding phantom reflection intensity scaling factors in the phantom reflection compensation model can be used to perform intensity scaling, thereby obtaining an intensity-scaled image as the weighted image.
  • the image to be compensated may be rotated; and the phantom reflection compensation model may be used to weight (for example, multiply) the rotated image to obtain a weighted image.
  • the compensated image is obtained by subtracting pixel intensities at corresponding positions in the image to be compensated and the weighted image.
  • The operations such as rotation, multiplication, and subtraction can be performed in a manner similar to that described above with reference to FIG. 10, except that the input image on the left is the captured image to be compensated, and the output image on the right is the compensated image.
  • In the compensated image, the phantom reflection has been effectively eliminated.
  • the image to be compensated is shifted and rotated according to the central parameters cx and cy of the phantom reflection compensation model, that is, the rotation can be performed around an axis deviating from the center of the image.
  • each position of the transformed image is multiplied by the corresponding phantom reflection compensation model factor.
  • the model diagram can be aligned with the transformed image and then multiplied to perform intensity scaling.
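  • As a concrete reading of this shift-rotate-weight-subtract sequence, the sketch below applies a stored model to a captured image; scipy.ndimage.shift is used for the center shift, and the clipping of negative values and the border handling are assumptions of this sketch rather than details taken from the disclosure.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def phantom_compensate(image, factor_map, cx, cy):
    """Eccentric 180-degree rotation, intensity scaling, and subtraction."""
    # 1) shift so that the eccentric rotation axis (offset cx, cy) moves to the image center
    centered = nd_shift(image, shift=(-cy, -cx), order=1, mode="nearest")
    # 2) rotate 180 degrees around the image center
    rotated = np.rot90(centered, 2)
    # 3) shift back so the rotated content is re-aligned with the original frame
    realigned = nd_shift(rotated, shift=(cy, cx), order=1, mode="nearest")
    # 4) weight by the phantom-reflection intensity factors
    weighted = realigned * factor_map
    # 5) subtract the weighted image from the image to be compensated
    return np.clip(image - weighted, 0.0, None)

# Usage with the illustrative model sketched above:
# compensated = phantom_compensate(to_be_compensated, model, cx=5.0, cy=-3.0)
```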
  • the phantom reflection compensation operation according to the present disclosure may be performed before or after the scattering compensation, and substantially similar advantageous effects can be obtained.
  • Fig. 11(a) shows that the phantom reflection compensation is performed before the scattering compensation, that is, the above-mentioned image to be compensated is the original image obtained by the ToF sensor.
  • Fig. 11(b) shows that phantom reflection compensation is performed after scattering compensation, that is to say, the above-mentioned image to be compensated is an image that has already undergone scattering compensation.
  • the method further includes a scatter compensation step for compensating for scatter in the image.
  • the imaging device according to the present disclosure is an imaging device using a photographing filter.
  • the camera includes a ToF sensor, and the image to be compensated includes a depth image.
  • the foregoing example mainly describes the case where the image to be compensated for a scene is an image.
  • the embodiments of the present invention can also be used in a situation where there are at least two images to be compensated for a scene.
  • the original image data obtained by scene shooting may correspond to at least two sub-images, and the phantom reflection compensation operation according to the present disclosure is performed for each sub-image, including the above-mentioned calculation and compensation steps, thereby obtaining a compensated At least two sub-images.
  • the final compensated image corresponding to the scene can be obtained by combining at least two compensated sub-images.
  • The at least two sub-images may include an I image and a Q image corresponding to the original image data.
  • The I and Q images are used below as examples to illustrate the compensation of sub-images.
  • FIG. 12 shows an example of an operation including phantom reflection compensation for I and Q images.
  • In order to measure distance, iToF sensors usually need to capture 4 components. These components relate to the phase shift between the emitted light (laser) and the light returning to the sensor for predefined phase offsets, and they are recorded for 0 degrees, 90 degrees, 180 degrees, and 270 degrees, respectively. These 4 components are obtained as the ToF raw data.
  • The I image can indicate the captured in-phase data; for example, the I image is a combination of the 0-degree component and the 180-degree component.
  • The Q image can indicate the captured quadrature-phase data; for example, the Q image is a combination of the 90-degree component and the 270-degree component.
  • I and Q images are calculated as follows:
  • The specific compensation method can be performed as described above with reference to FIG. 10; in particular, the above-mentioned phantom reflection compensation operation is performed separately for each of the I and Q images, and a detailed description is not repeated here.
  • compensated I and Q images are used to generate confidence images and depth images.
  • the confidence image is calculated based on I and Q, as shown below:
  • Here, abs() indicates the absolute value function, applied to the value of each sub-region or pixel of the I image and of the Q image, respectively.
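  • The exact expressions are not reproduced above; a commonly used convention for iToF raw data, given here purely as an assumption, is to take differences of opposite-phase components for I and Q and to combine their absolute values for the confidence:

```python
import numpy as np

def iq_and_confidence(c0, c90, c180, c270):
    """c0..c270: raw component frames captured at the 0/90/180/270-degree phase offsets.

    Assumed convention (not the disclosure's exact formula): I and Q are the
    differences of opposite components, and the confidence combines their
    absolute values.
    """
    i_img = c0 - c180          # in-phase data
    q_img = c90 - c270         # quadrature-phase data
    confidence = np.abs(i_img) + np.abs(q_img)
    return i_img, q_img, confidence

# The phantom reflection compensation described above would be applied to i_img
# and q_img separately before the confidence and depth are computed.
```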
  • the confidence map can also be obtained in other ways known in the art, which will not be described in detail here.
  • the depth map can be obtained from at least one of the I and Q images, which can be obtained in a manner known in the art, and will not be described in detail here.
  • I and Q images are only exemplary. Other sub-images are also possible, as long as they can be obtained from the camera and can be combined to obtain the confidence image and the depth map.
  • FIG. 13 illustrates the beneficial effects of phantom reflection compensation according to an embodiment of the present disclosure.
  • the confidence image and the depth map corresponding to the image shown in FIG. 4A are respectively shown.
  • the left part indicates the confidence map and the depth map corresponding to the original scene map from top to bottom, including scattering and phantom reflection
  • The middle part indicates, from top to bottom, the confidence map and the depth map after scattering compensation. It can be seen that, because only scattering compensation is performed, even though the scattering noise in the image is removed, phantom reflections still remain on the left side of the image.
  • the right part respectively indicates the confidence map and the depth map compensated by the method of the present disclosure from top to bottom. It can be seen that the method of the present disclosure effectively removes ghost reflections in the image, and a high-quality output image can be obtained.
  • the above mainly uses the examples in the iToF sensor to describe the problem of phantom reflection and the operation of phantom reflection compensation.
  • The phantom reflection does not depend on the emitter or the image sensor, but mainly depends on the components in the camera device that cause light reflection, especially on the photographic filter and/or lens used in the camera. This means that this phenomenon can also be observed on other 3D measurement systems that use light, including (but not limited to) the following:
  • 14A and 14B show phantom reflection compensation when using a direct ToF (dToF) sensor for image shooting.
  • dToF concentrates light energy in a short period of time. It includes generating photon packets through short pulses of laser or LED, and directly calculating the propagation time of these photons to reach the target and return. Then, an appropriate technique can be used to accumulate multiple events into a histogram in order to identify the target peak position on the background noise that is usually uniformly distributed.
  • the technology may be a technology called Time Correlated Single Photon Counting (TCSPC).
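  • As a simple, generic illustration of this accumulation step (not specific to the disclosure), photon arrival times from many pulses can be binned into a histogram and the target distance read off from the peak rising above the roughly uniform background; the bin width is an assumed value.

```python
import numpy as np

BIN_WIDTH_S = 250e-12      # assumed TDC bin width
C = 299_792_458.0          # speed of light in m/s

def tof_histogram(arrival_times_s, num_bins=1024):
    """Accumulate photon arrival times (in seconds) from many pulses into a histogram."""
    hist, _ = np.histogram(arrival_times_s,
                           bins=num_bins,
                           range=(0.0, num_bins * BIN_WIDTH_S))
    return hist

def peak_depth(hist):
    """Take the highest bin as the target return and convert round-trip time to depth."""
    peak_bin = int(np.argmax(hist))
    round_trip = (peak_bin + 0.5) * BIN_WIDTH_S
    return round_trip * C / 2.0
```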
  • phantom reflections can appear as "phantom peaks" in the histogram of the affected pixels.
  • the upper histogram corresponds to the histogram of the object with high intensity in the field of view (FoV), where the high peak value indicates its corresponding depth.
  • the histogram in the lower part corresponds to the histogram at the position where phantom reflections appear, in which depth peaks also appear due to the influence of phantom reflections, which may lead to erroneous depth detection.
  • phantom reflection compensation can be performed for dToF.
  • phantom reflection compensation is performed in the manner described above, such as shifting and rotating, multiplying, and subtracting operations to perform phantom reflection compensation, as shown in FIG. 14B. It can be seen from the output histogram on the right that through the phantom reflection compensation of the present disclosure, the histogram corresponding to the phantom reflection is significantly suppressed, and its peak value is much smaller than the peak value of the real object, so that it will not cause erroneous depth detection.
  • the phantom reflection compensation model can also be generated as described above with reference to FIG. 10, except that the input is the pixel histogram obtained from the captured data. Moreover, any other operations described above are equally applicable to dToF.
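  • For dToF data, the same operations can be read as acting on a per-pixel histogram cube rather than on a single intensity image. The sketch below applies the spatial rotation, weighting, and subtraction independently to every histogram bin; treating the bins independently and omitting the center offset are simplifications of this sketch.

```python
import numpy as np

def phantom_compensate_histograms(hist_cube, factor_map):
    """hist_cube: array of shape (H, W, bins) holding per-pixel ToF histograms.

    The spatial rotate/weight/subtract of FIG. 10 is applied bin by bin, so
    phantom peaks are suppressed while real peaks are left largely untouched.
    """
    rotated = np.rot90(hist_cube, 2, axes=(0, 1))      # 180-degree spatial rotation
    weighted = rotated * factor_map[:, :, None]        # same factor for every bin
    return np.clip(hist_cube - weighted, 0.0, None)
```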
  • Since the phantom reflection compensation model mainly corresponds to the component that causes the light reflection responsible for the phantom reflection in the photographing device, such as the lens and/or the filter, a compensation model obtained for this component when using any one ToF sensor can subsequently be applied to other types of ToF sensors, and even to other types of sensors that use the same component.
  • FIG. 15 shows phantom reflection compensation when using a spot ToF (spotToF) sensor for image shooting.
  • Figure 15(a) is a confidence map taken when there is no close object. Figure 15(b) shows an ideal confidence map taken when there is a close object, in which, even though a close object is present in the scene, no spot information appears apart from the spot information of the close object itself.
  • Figure 15(c) is the confidence map in the presence of phantom reflection, where it can be seen that when there is a close object on the right side of the scene, new spots appear on the left side of the scene. These new spots are generated by phantom reflection; they may produce wrong depth values or may mix their signal with the existing spots.
  • Figure 15(d) is a confidence map after phantom reflection compensation is performed using the solution of the present disclosure, in which the new spots caused by phantom reflection are effectively removed, which improves the image quality.
  • the phantom reflection compensation function according to the present disclosure may be used automatically or selected by the user.
  • the phantom compensation function of the present invention can be realized automatically.
  • the phantom compensation function can be associated with a specific shooting mode of the camera, and the phantom compensation function is automatically activated when the shooting mode is turned on during shooting.
  • For example, in shooting modes related to close objects, such as a close-up mode or bokeh mode, the phantom compensation function is automatically turned on, while in distant shooting modes, such as landscape modes, the phantom compensation function is not automatically turned on.
  • the camera can also determine whether to automatically turn on the phantom compensation function according to the distance from the subject.
  • For example, when the distance to the subject is greater than a certain distance threshold, the shot can be regarded as long-range shooting and the phantom compensation function is not turned on, and when the distance to the subject is less than the distance threshold, the shot can be regarded as close-up shooting and the phantom compensation function is turned on.
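  • A minimal sketch of such an enable decision, assuming a hypothetical shooting-mode flag and a device-specific distance threshold (neither value comes from the disclosure):

```python
# Hypothetical policy: enable phantom compensation in close-object shooting modes,
# or when the estimated subject distance falls below a device-specific threshold.
CLOSE_MODES = {"close_up", "bokeh"}
DISTANCE_THRESHOLD_M = 0.5   # assumed value, not specified in the disclosure

def phantom_compensation_enabled(shooting_mode: str, subject_distance_m: float) -> bool:
    if shooting_mode in CLOSE_MODES:
        return True
    return subject_distance_m < DISTANCE_THRESHOLD_M
```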
  • Alternatively, the phantom compensation function of the present invention can be set by the user. For example, a prompt may appear on the camera's photographing operation interface asking the user whether to enable the phantom compensation function; when the user selects this function, the phantom compensation function is turned on to perform phantom compensation/elimination when taking pictures. The prompt may be, for example, a button that appears on a touch user interface, or a physical button on the camera assigned to the phantom compensation function.
  • the phantom reflection compensation model according to the present disclosure may be stored in various ways.
  • For example, the model can be stored permanently (solidified) with the camera device, in particular with the camera lens module that contains the lens and filter, so that even if that lens module is moved to other equipment, the model can still be used as-is without requiring a new model extraction.
  • the model can be stored in a device that can be connected to a camera for taking pictures, such as a portable electronic device.
  • the technical solution of the present disclosure is particularly suitable for various applications that need to obtain depth information of objects in a shooting scene, such as a camera device that needs to measure depth information.
  • the technical solution of the present disclosure may be suitable for imaging devices that use sensors based on ToF technology, such as iToF, Full-field ToF (Full-field ToF), Spot ToF (spotToF), and the like.
  • the technical solution of the present disclosure may be suitable for a 3D camera device, because the depth/distance information is very important for obtaining a good 3D image.
  • the embodiments of the present disclosure can also be applied to RGB sensors, especially when there is only one photographic filter in the system, for example in a portable mobile device whose camera is equipped with a cover glass that acts as the filter in this scenario.
  • the solution of the present disclosure can be applied to certain specific shooting modes where ghost reflections are likely to occur.
  • the phantom reflection compensation scheme according to the present disclosure is also particularly suitable for camera modes related to shooting close objects, such as close-up mode, bokeh mode, and so on.
  • the influence of ghost reflection in the image can be effectively eliminated.
  • the depth information of objects in the scene, that is, distance information, can be accurately determined, so that accurate focus can be obtained when taking pictures, or high-quality images can be obtained to facilitate subsequent image-based applications.
  • for a bokeh effect, for example, the solution of the present disclosure can eliminate wrong depth values and obtain an appropriate object distance.
  • for an auto-focus application, even if the object is close to the camera, its distance can be identified accurately and a good focusing distance can be used to take the picture.
  • for facial ID recognition, when the photographed subject is close to the camera (for example, a camera on a table) for face recognition, the solution of the present disclosure can effectively remove ghosts from the image and thus obtain a high-quality image for recognition.
  • the technical solution of the present disclosure is particularly applicable to cameras in portable devices, such as cameras in mobile phones, tablet computers, and the like.
  • the lens and/or camera filter of the camera may be fixed or replaceable.
  • FIG. 7 shows a block diagram of an electronic device capable of phantom reflection compensation according to an embodiment of the present disclosure.
  • the electronic device 700 includes a processing circuit 720, which can be configured to weight the image to be compensated, containing the ghost reflection, by using a ghost reflection compensation model, and to combine the image to be compensated with the weighted image to eliminate the ghost reflection in the image.
  • the processing circuit 720 may be in the form of a general-purpose processor, or may be a dedicated processing circuit, such as an ASIC.
  • the processing circuit 720 can be constructed by a circuit (hardware) or a central processing device (such as a central processing unit (CPU)).
  • the processing circuit 720 may carry a program (software) for operating the circuit (hardware) or the central processing device.
  • the program can be stored in a memory (such as arranged in a memory) or an external storage medium connected from the outside, and downloaded via a network (such as the Internet).
  • the processing circuit 720 may include various units for realizing the above-mentioned functions, for example, a calculation unit 722 for weighting the image to be compensated including phantom reflection by using a phantom reflection compensation model; and a phantom reflection compensation unit 724, used to combine the image to be compensated and the weighted image to eliminate ghost reflections in the image.
  • the processing circuit 720 may further include a scattering compensation unit 726 and a data path processing unit 728. Each unit can be operated as described above, which will not be described in detail here.
  • the scattering compensation unit 726 and the data path processing unit 728 are drawn with dashed lines to illustrate that these units are not necessarily included in the processing circuit.
  • as an example, such a unit can be located in the terminal-side electronic device but outside the processing circuit, or even outside the electronic device 700. It should be noted that, although each unit is shown as a separate unit in FIG. 7, one or more of these units can also be combined into one unit or split into multiple units.
  • each of the foregoing units may be implemented as an independent physical entity, or may also be implemented by a single entity (for example, a processor (CPU or DSP, etc.), an integrated circuit, etc.).
  • the above-mentioned units are shown with dotted lines in the drawings to indicate that these units may not actually exist, and the operations/functions implemented by them may be implemented by the processing circuit itself.
  • FIG. 7 is only a schematic structural configuration of the terminal-side electronic device, and the electronic device 700 may also include other possible components (e.g., memory, etc.).
  • the terminal-side electronic device 700 may also include other components not shown, such as a memory, a network interface, and a controller.
  • the processing circuit may be associated with the memory.
  • the processing circuit may be directly or indirectly (for example, other components may be connected in between) connected to the memory to access data.
  • the memory may store various information generated by the processing circuit 720.
  • the memory may also be located in the terminal-side electronic device but outside the processing circuit, or even outside the terminal-side electronic device.
  • the memory may be volatile memory and/or non-volatile memory.
  • the memory may include, but is not limited to, random access memory (RAM), dynamic random access memory (DRAM), static random access memory (SRAM), read only memory (ROM), flash memory.
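  • to make the weighting and combining performed by the processing circuit 720 concrete, the following is a minimal NumPy sketch rather than the claimed implementation: a phantom-reflection factor map is built as a weighted sum of Gaussian profiles of the radial distance (interpreting the Gaussian terms as normalised profiles is an assumption), and the image to be compensated is shifted by the centre-offset parameters, rotated by 180 degrees, shifted back, weighted by the factor map, and subtracted from the original. The parameter values (two Gaussians with std 38 and 50, weights 1.3 and 0.6, centre offset -2.5/-2.5 for a 240x180 image) follow the illustrative model given in the description; the helper names and the use of scipy.ndimage.shift are implementation assumptions.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def gaussian_factor_map(width=240, length=180,
                        terms=((1.3, 38.0, 0.0), (0.6, 50.0, 60.0))):
    """Phantom-reflection factor map as a weighted sum of Gaussian profiles.

    Each term is (weight, std, mu) applied to the radial distance from the
    image centre, so the factor is largest near the centre and falls off
    towards the edges. Treating Gaussian(std, mu) as a normalised profile
    is an assumption; in practice the scale is fixed by calibration.
    """
    y, x = np.mgrid[0:length, 0:width]
    r = np.hypot(x - (width - 1) / 2.0, y - (length - 1) / 2.0)
    factors = np.zeros((length, width), dtype=np.float64)
    for weight, std, mu in terms:
        factors += (weight * np.exp(-0.5 * ((r - mu) / std) ** 2)
                    / (std * np.sqrt(2.0 * np.pi)))
    return factors

def compensate_phantom(image, factors, cx=-2.5, cy=-2.5):
    """Centre-shift, rotate 180 degrees, shift back, weight, then subtract.

    The sign convention of the (cx, cy) shift is an assumption; like the
    other parameters it would be fixed during calibration.
    """
    img = np.asarray(image, dtype=np.float64)
    shifted = nd_shift(img, shift=(cy, cx), order=1, mode="nearest")
    rotated = shifted[::-1, ::-1]                      # 180-degree rotation
    aligned = nd_shift(rotated, shift=(-cy, -cx), order=1, mode="nearest")
    phantom_estimate = aligned * factors               # weighted image
    # For non-negative confidence/intensity images the result may be clipped at zero.
    return img - phantom_estimate
```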
  • FIG. 16 shows a block diagram of an imaging apparatus according to an embodiment of the present disclosure.
  • the camera 1600 includes a compensation device 700, which can be used for image compensation processing, especially phantom reflection compensation, and the compensation device can be implemented by an electronic device, such as the electronic device 700 described above.
  • the camera 1600 may include a lens unit 1602, which may include various optical lenses known in the art for imaging an object on the sensor through optical imaging.
  • the imaging device 1600 may include a photographic filter 1604, which may include various photographic filters known in the art, which may be mounted to the front of the lens.
  • the imaging apparatus 1600 may further include a processing circuit 1606, which may be used to process the obtained image.
  • as an example, the compensated image may be further processed, or the image to be compensated may be preprocessed.
  • the camera 1600 may also include various image sensors, such as the aforementioned various sensors based on the ToF technology. However, these sensors may also be located outside the camera 1600.
  • the photographic filter and the processing circuit are drawn with dashed lines to illustrate that these units are not necessarily included in the imaging apparatus 1600 and can even be located outside the imaging apparatus 1600 and connected to and/or communicate with it in a known manner. It should be noted that although each unit is shown as a separate unit in FIG. 16, one or more of these units can also be combined into one unit or split into multiple units.
  • the processing circuit 1606 may be in the form of a general-purpose processor, or may be a special-purpose processing circuit, such as an ASIC.
  • the processing circuit 1606 can be constructed by a circuit (hardware) or a central processing device (such as a central processing unit (CPU)).
  • the processing circuit 1606 may carry a program (software) for operating the circuit (hardware) or the central processing device.
  • the program can be stored in a memory (such as arranged in a memory) or an external storage medium connected from the outside, and downloaded via a network (such as the Internet).
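  • as the description notes for Figure 11, phantom reflection compensation may be performed either before or after the scattering compensation handled by unit 726, with a similar benefit; a rough sketch of such a configurable data path, with hypothetical helper functions passed in as parameters, is:

```python
def tof_data_path(raw_frame, factors, cx, cy,
                  compensate_scattering, compensate_phantom,
                  compute_confidence_and_depth, phantom_first=True):
    """Hypothetical ToF data path; the injected helper names are assumptions.

    Phantom reflection compensation can run either before or after the
    scattering compensation with a similar benefit.
    """
    if phantom_first:
        frame = compensate_phantom(raw_frame, factors, cx, cy)
        frame = compensate_scattering(frame)
    else:
        frame = compensate_scattering(raw_frame)
        frame = compensate_phantom(frame, factors, cx, cy)
    return compute_confidence_and_depth(frame)
```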
  • the technology of the present disclosure can be applied to various products.
  • the technology of the present disclosure can be applied to the camera device itself, such as being built into the camera lens and integrated with the camera lens.
  • the technology of the present disclosure can be executed by the processor of the camera device in the form of a software program, or integrated in the form of an integrated circuit together with a processor; it can also be used in a device connected to a camera device, such as a portable mobile device on which the camera device is mounted.
  • in the latter case, the technology of the present disclosure can likewise be executed by that device's processor as a software program, integrated in the form of an integrated circuit together with a processor, or even integrated into an existing processing circuit, to perform phantom reflection compensation during photographing.
  • the technology of the present disclosure can be applied to various camera devices, such as a lens mounted on a portable device, a camera device on an unmanned aerial vehicle, a camera device in a monitoring device, and so on.
  • the invention can be used in many applications.
  • for example, the present invention can be used to monitor, identify, and track objects in still images or moving videos captured by a camera, and is particularly advantageous for portable devices equipped with cameras, camera-based mobile phones, and the like.
  • FIG. 17 is a block diagram showing an example structure of a personal computer of an information processing apparatus that can be employed in an embodiment of the present disclosure.
  • the personal computer may correspond to the above-mentioned exemplary transmitting device or terminal-side electronic device according to the present disclosure.
  • a central processing unit (CPU) 1301 executes various processes in accordance with a program stored in a read only memory (ROM) 1302 or a program loaded from a storage portion 1308 to a random access memory (RAM) 1303.
  • the RAM 1303 also stores data required when the CPU 1301 executes various processes and the like as necessary.
  • the CPU 1301, the ROM 1302, and the RAM 1303 are connected to each other via a bus 1304.
  • the input/output interface 1305 is also connected to the bus 1304.
  • the following components are connected to the input/output interface 1305: an input part 1306 including a keyboard, a mouse, etc.; an output part 1307 including a display, such as a cathode ray tube (CRT) or liquid crystal display (LCD), and a speaker, etc.; a storage part 1308 including a hard disk, etc.; and a communication part 1309 including a network interface card such as a LAN card, a modem, etc.
  • the communication section 1309 performs communication processing via a network such as the Internet.
  • the driver 1310 is also connected to the input/output interface 1305 as required.
  • Removable media 1311 such as magnetic disks, optical disks, magneto-optical disks, semiconductor memories, etc. are mounted on the drive 1310 as required, so that the computer programs read from them are installed in the storage section 1308 as required.
  • in the case where the above-described series of processing is implemented by software, a program constituting the software is installed from a network such as the Internet or from a storage medium such as the removable medium 1311.
  • those skilled in the art will appreciate that this storage medium is not limited to the removable medium 1311 shown in FIG. 17, in which the program is stored and which is distributed separately from the device in order to provide the program to the user.
  • examples of the removable medium 1311 include magnetic disks (including floppy disks (registered trademark)), optical disks (including compact disc read-only memory (CD-ROM) and digital versatile discs (DVD)), magneto-optical disks (including MiniDiscs (MD) (registered trademark)), and semiconductor memories.
  • alternatively, the storage medium may be the ROM 1302, a hard disk included in the storage portion 1308, or the like, in which the program is stored and which is distributed to users together with the device containing it.
  • the method and system of the present invention can be implemented through software, hardware, firmware, or any combination thereof.
  • the order of the steps of the method described above is only illustrative, and unless specifically stated otherwise, the steps of the method of the present invention are not limited to the order specifically described above.
  • the present invention may also be embodied as a program recorded in a recording medium, including machine-readable instructions for implementing the method according to the present invention. Therefore, the present invention also covers a recording medium storing a program for implementing the method according to the present invention.
  • Such storage media may include, but are not limited to, floppy disks, optical disks, magneto-optical disks, memory cards, memory sticks, and so on.
  • embodiments of the present disclosure may also include the following illustrative examples (EEs).
  • EE1. An electronic device for compensating for phantom reflection in an image captured by a camera device, including a processing circuit configured to: weight the image to be compensated, containing the phantom reflection, by using a phantom reflection compensation model, wherein the phantom reflection compensation model is related to the intensity distribution in the image of the phantom reflection caused by light reflection in the camera device when shooting; and combine the image to be compensated and the weighted image to eliminate the phantom reflection in the image.
  • EE2. The electronic device according to EE1, wherein the phantom reflection compensation model is trained from a predetermined number of calibration images, and is trained such that, after the phantom reflection compensation model is applied, the intensity variation of the phantom reflection area in a calibration image compared with the areas adjacent to the phantom reflection area is smaller than a certain threshold or is minimized.
  • EE3. The electronic device according to EE2, wherein in the training of the phantom reflection compensation model the intensity variation is determined by: center-shifting the calibration image according to a preset center-shift parameter of the phantom reflection compensation model to be trained, rotating it about the shifted center, and then reverse center-shifting the rotated image according to that parameter; multiplying the shifted and rotated image by the phantom reflection compensation model with preset intensity factor parameters; and subtracting the pixel intensities at corresponding positions in the calibration image and the image obtained by multiplication by the model, to obtain the intensity variation.
  • EE4 The electronic device according to EE 1, wherein the phantom reflection compensation model includes a phantom reflection factor corresponding to each sub-region in the image, wherein the sub-region includes at least one pixel.
  • EE5 The electronic device according to EE 4, wherein the phantom reflection compensation model is set such that the phantom reflection factor at the sub-region near the center of the image is greater than the phantom reflection factor at the sub-region near the edge of the image.
  • EE6 The electronic device according to EE 4, wherein the phantom reflection factor is determined based on a Gaussian distribution.
  • EE7. The electronic device according to any one of EE4-6, wherein the processing circuit is configured to: for each sub-region in the captured image, perform intensity scaling using the corresponding phantom reflection intensity factor in the phantom reflection compensation model, so as to obtain an intensity-scaled image as the weighted image.
  • EE8. The electronic device according to EE1, wherein the phantom reflection model is related to the characteristics of the component in the photographing device that causes the light reflection giving rise to phantom reflection when taking pictures, and wherein the parameters of the phantom reflection model depend on the characteristics of that component.
  • EE9 The electronic device according to EE 8, wherein the component includes at least one of a lens and a photographic filter.
  • EE10. The electronic device according to EE8 or 9, wherein the phantom reflection model includes at least parameters related to the center offset and parameters for determining the Gaussian distribution of the phantom reflection factor.
  • EE11. The electronic device according to EE1, wherein the phantom reflection model includes parameters related to center offset, and wherein the processing circuit is configured to: center-shift the image to be compensated according to those parameters, rotate it about the shifted center, and then reverse center-shift the rotated image according to those parameters; and weight the shifted and rotated image using the phantom reflection compensation model to obtain the weighted image.
  • EE12. The electronic device according to EE11, wherein the shifting and rotating are such that the phantom in the shifted and rotated image corresponds to the position of the object in the original image, and the object in the shifted and rotated image corresponds to the position of the phantom in the original image.
  • EE13. The electronic device according to EE1, wherein the processing circuit is configured to: obtain the compensated image by subtracting the pixel intensities at corresponding positions in the image to be compensated and the weighted image.
  • EE14. The electronic device according to EE1, wherein the image to be compensated corresponds to at least two sub-images, and phantom reflection compensation is performed for each sub-image, whereby the compensated image is obtained by combining the at least two compensated sub-images.
  • EE15 The electronic device according to EE 14, wherein the at least two sub-images include an I image and a Q image obtained by shooting original image data.
  • EE16 The electronic equipment according to EE 1, wherein the imaging device is an optical imaging device using a photographic filter.
  • EE17 The electronic device according to any one of EE 1-16, wherein the camera device includes a ToF sensor, and the image includes a depth image.
  • EE18. A method for compensating for phantom reflection in an image captured by a camera device, including the following steps: a calculation step of weighting the image to be compensated, containing the phantom reflection, by using a phantom reflection compensation model, wherein the phantom reflection compensation model is related to the intensity distribution in the image of the phantom reflection caused by light reflection in the camera device when shooting; and a compensation step of combining the image to be compensated and the weighted image to eliminate the phantom reflection in the image.
  • EE19. The method according to EE18, wherein the phantom reflection compensation model is trained from a predetermined number of calibration images, and is trained such that, after the phantom reflection compensation model is applied, the intensity variation of the phantom reflection area in a calibration image compared with the areas adjacent to the phantom reflection area is smaller than a certain threshold or is minimized.
  • EE20. The method according to EE19, wherein in the training of the phantom reflection compensation model the intensity variation is determined by: center-shifting the calibration image according to a preset center-shift parameter of the phantom reflection compensation model to be trained, rotating it about the shifted center, and then reverse center-shifting the rotated image according to that parameter; multiplying the shifted and rotated image by the phantom reflection compensation model with preset intensity factor parameters; and subtracting the pixel intensities at corresponding positions in the calibration image and the image obtained by multiplication by the model, to obtain the intensity variation.
  • EE21. The method according to EE18, wherein the phantom reflection compensation model includes a phantom reflection factor corresponding to each sub-region in the image, wherein a sub-region includes at least one pixel.
  • EE22 The method according to EE 20, wherein the phantom reflection compensation model is set such that the phantom reflection factor at the sub-region near the center of the image is greater than the phantom reflection factor at the sub-region near the edge of the image.
  • EE23 The method according to EE 20, wherein the phantom reflection factor is determined based on a Gaussian distribution.
  • EE24. The method according to any one of EE20-22, wherein the calculation step further includes: for each sub-region in the captured image, performing intensity scaling using the corresponding phantom reflection intensity factor in the phantom reflection compensation model, so as to obtain an intensity-scaled image as the weighted image.
  • EE25. The method according to EE18, wherein the phantom reflection model is related to the characteristics of the component in the photographing device that causes the light reflection giving rise to phantom reflection when taking pictures, and wherein the parameters of the phantom reflection model depend on the characteristics of that component.
  • EE26 The method according to EE 24, wherein the component includes at least one of a lens and a photographic filter.
  • EE27. The method according to EE24 or 25, wherein the phantom reflection model includes at least parameters related to the center offset and parameters for determining the Gaussian distribution of the phantom reflection factor.
  • EE28. The method according to EE18, wherein the phantom reflection model includes parameters related to center offset, and wherein the calculation step further includes: center-shifting the image to be compensated according to those parameters, rotating it about the shifted center, and then reverse center-shifting the rotated image according to those parameters; and weighting the shifted and rotated image using the phantom reflection compensation model to obtain the weighted image.
  • EE29. The method according to EE28, wherein the shifting and rotating are such that the phantom in the shifted and rotated image corresponds to the position of the object in the original image, and the object in the shifted and rotated image corresponds to the position of the phantom in the original image.
  • EE30. The method according to EE18, wherein the compensation step includes: obtaining the compensated image by subtracting the pixel intensities at corresponding positions in the image to be compensated and the weighted image.
  • EE31. The method according to EE18, wherein the image to be compensated corresponds to at least two sub-images, and phantom reflection compensation is performed for each sub-image, whereby the compensated image is obtained by combining the at least two compensated sub-images.
  • EE32 The method according to EE 30, wherein the at least two sub-images include an I image and a Q image obtained by shooting original image data.
  • EE33 The method according to EE 18, wherein the imaging device is an optical imaging device using a photographic filter.
  • EE34 The method according to any one of EE 18-33, wherein the camera device includes a ToF sensor, and the image includes a depth image.
  • EE35. An electronic device for performing phantom reflection compensation for image capture using a direct time-of-flight (dToF) sensor, including a processing circuit configured to: weight the pixel histograms to be compensated, containing the phantom reflection and obtained from the captured raw data, by using a phantom reflection compensation model; and combine the pixel histograms to be compensated and the weighted pixel histograms to eliminate the phantom reflection.
  • EE36. The electronic device according to EE35, wherein the phantom reflection model includes parameters related to center offset, and wherein the processing circuit is configured to: center-shift the histograms to be compensated according to those parameters, rotate them about the shifted center, and then reverse center-shift the rotated histograms according to those parameters; and weight the shifted and rotated histograms using the phantom reflection compensation model to obtain weighted histograms.
  • EE37. The electronic device according to EE35, wherein the shifting and rotating are such that the phantom reflection peak in the shifted and rotated histogram corresponds to the object peak in the original histogram.
  • EE38. The electronic device according to EE35, wherein the processing circuit is configured to: obtain compensated histograms by subtracting the values at corresponding positions in the histograms to be compensated and the weighted histograms.
  • EE39. A method for performing phantom reflection compensation for image capture using a direct time-of-flight (dToF) sensor, including: a calculation step of weighting the pixel histograms to be compensated, containing the phantom reflection and obtained from the captured raw data, by using a phantom reflection compensation model; and a compensation step of combining the pixel histograms to be compensated and the weighted pixel histograms to eliminate the phantom reflection.
  • EE40. The method according to EE39, wherein the phantom reflection model includes parameters related to center offset, and wherein the calculation step further includes: center-shifting the histograms to be compensated according to those parameters, rotating them about the shifted center, and then reverse center-shifting the rotated histograms according to those parameters; and weighting the shifted and rotated histograms using the phantom reflection compensation model to obtain weighted histograms.
  • EE41. The method according to EE39, wherein the shifting and rotating are such that the phantom reflection peak in the shifted and rotated histogram corresponds to the object peak in the original histogram.
  • EE42. The method according to EE39, wherein the compensation step further includes: obtaining compensated histograms by subtracting the values at corresponding positions in the histograms to be compensated and the weighted histograms.
  • EE43. A device including: at least one processor; and at least one storage device having instructions stored thereon which, when executed by the at least one processor, cause the at least one processor to perform the method according to any one of EE18-34 and 39-42.
  • EE44 A storage medium storing instructions that, when executed by a processor, can cause the method according to any one of EE18-34 and 39-42 to be executed.
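  • EE19-EE20 above describe training the compensation model from calibration images so that the intensity variation between the phantom reflection area and its neighbouring areas falls below a threshold or is minimized; the following rough sketch shows one possible fitting loop (a plain grid search, although the description equally allows iterative adjustment or solving a minimisation problem), where the masks, the candidate parameter sets and the injected build_factor_map/compensate_phantom helpers are assumptions:

```python
def ghost_variation(compensated, ghost_mask, neighbour_mask):
    """Intensity variation: ghost region vs. its neighbouring region."""
    return abs(compensated[ghost_mask].mean() - compensated[neighbour_mask].mean())

def fit_phantom_model(calib_images, ghost_masks, neighbour_masks,
                      candidate_params, build_factor_map, compensate_phantom):
    """Pick the candidate (cx, cy, factor_kwargs) whose summed ghost-vs-
    neighbourhood variation over all calibration images is smallest."""
    best_params, best_score = None, float("inf")
    for cx, cy, factor_kwargs in candidate_params:
        factors = build_factor_map(**factor_kwargs)    # e.g. Gaussian stds/weights
        score = 0.0
        for img, g_mask, n_mask in zip(calib_images, ghost_masks, neighbour_masks):
            compensated = compensate_phantom(img, factors, cx, cy)
            score += ghost_variation(compensated, g_mask, n_mask)
        if score < best_score:
            best_params, best_score = (cx, cy, factor_kwargs), score
    return best_params
```

  • EE14-EE15, EE31-EE32 and EE35-EE42 apply the same compensation per sub-image (I and Q) and, for dToF, per pixel histogram; a minimal sketch under the same assumptions, using the component and confidence formulas quoted in the description (I = c(0°) − c(180°), Q = c(90°) − c(270°), confidence = |I| + |Q|) and interpreting the per-histogram compensation as applying the spatial shift/rotate/weight/subtract operation to each time bin, is:

```python
import numpy as np

def compensate_iq(c0, c90, c180, c270, factors, cx, cy, compensate_phantom):
    """Compensate the I and Q sub-images separately, then derive confidence."""
    i_img = np.asarray(c0, dtype=float) - np.asarray(c180, dtype=float)   # in-phase
    q_img = np.asarray(c90, dtype=float) - np.asarray(c270, dtype=float)  # quadrature
    i_comp = compensate_phantom(i_img, factors, cx, cy)
    q_comp = compensate_phantom(q_img, factors, cx, cy)
    confidence = np.abs(i_comp) + np.abs(q_comp)
    return i_comp, q_comp, confidence

def compensate_dtof_histograms(histograms, factors, cx, cy, compensate_phantom):
    """dToF case: histograms has shape (height, width, bins); each time bin is
    treated as an image over the pixel grid and compensated bin by bin, which
    suppresses the 'phantom peak' in the histograms of affected pixels."""
    out = np.empty(histograms.shape, dtype=float)
    for b in range(histograms.shape[-1]):
        out[..., b] = compensate_phantom(histograms[..., b], factors, cx, cy)
    return out
```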

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

An electronic device for compensating for phantom reflection in an image captured by a camera device, including a processing circuit configured to: weight the image to be compensated, containing the phantom reflection, by using a phantom reflection compensation model, wherein the phantom reflection compensation model is related to the intensity distribution in the image of the phantom reflection caused by light reflection in the camera device when shooting; and combine the image to be compensated and the weighted image to eliminate the phantom reflection in the image.

Description

幻影反射补偿方法及设备 技术领域
本公开涉及图像处理,特别涉及图像补偿处理。
背景技术
近年来,静态图像或一系列运动图像(诸如视频)中的对象检测/识别/比对/跟踪被普遍地和重要地应用于图像处理、计算机视觉和图案识别领域,并且在其中起到重要作用。对象可以是人的身体部位,诸如脸部、手部、身体等,其它生物或者植物,或者任何其它希望检测的物体。对象识别是最重要的计算机视觉任务之一,其目标是根据输入的照片/视频来识别或验证特定的对象,继而准确地获知对象的相关信息。特别地,在一些应用场景中,在基于通过摄像装置拍摄的对象图像进行对象识别时,需要能够从图像中准确识别对象的细节信息,继而准确地识别对象。
然而,当前摄像装置所获得的图像中往往会包含各种噪声,而噪声的存在使得图像质量变差,可能会导致得到不准确甚至是错误的细节信息,继而会影响对象的成像和识别。
因此,需要改进的技术来改进图像处理以进一步抑制噪声。
除非另有说明,否则不应假定本节中描述的任何方法仅仅因为包含在本节中而成为现有技术。同样,除非另有说明,否则关于一种或多种方法所认识出的问题不应在本节的基础上假定在任何现有技术中都认识到。
发明内容
本公开的一个目的是改进图像处理以进一步抑制图像中的噪声,尤其是与幻影反射有关的噪声,继而提高图像质量。
特别地,拍摄的图像可能会存在幻影,继而导致图像质量差。本公开能够利用幻影反射补偿模型对图像进行补偿,有效地去除图像中的幻影,获得高质量的图像。
在一个方面,提供了一种用于补偿通过摄像装置拍摄的图像中的幻影反射的电子设备,包括处理电路,被配置为:通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权,其中所述幻影反射补偿模型与进行拍摄时所述摄像装置中的光反射导致的幻影反射在图像中的强度分布有关;以及组合待补偿图像和经加权的图像以消 除图像中的幻影反射。
在另一方面,提供了一种用于补偿通过摄像装置拍摄的图像中的幻影反射的方法,包括以下步骤:计算步骤,用于通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权,其中所述幻影反射补偿模型与进行拍摄时所述摄像装置中的光反射导致的幻影反射在图像中的强度分布有关;以及补偿步骤,用于组合待补偿图像和经加权的图像以消除图像中的幻影反射。
在还另一方面,提供了一种包括至少一个处理器和至少一个存储设备,所述至少一个存储设备其上存储有指令,该指令在由所述至少一个处理器执行时可使得所述至少一个处理器执行如本文所述的方法。
在仍另一方面,提供了一种存储有指令的存储介质,该指令在由处理器执行时可以使得执行如本文所述的方法。
从参照附图的示例性实施例的以下描述,本发明的其它特征将变得清晰。
附图说明
并入说明书中并且构成说明书的一部分的附图示出了本发明的实施例,并且与描述一起用于解释本发明的原理。在附图中,相似的附图标记指示相似的项目。
图1示出了ToF技术的概述图。
图2A示出了近距物体拍摄中的近距物体导致的光反射。
图2B示出了照相滤光器导致的光反射。
图3示出了图像中的幻影现象的示意图。
图4A到4C示出了置信度图像和深度图像中的幻影反射的示例。
图5A示出了本公开方案中的包含散射补偿的图像处理流程。
图5B示出了本公开方案中的示例性的散射补偿操作。
图5C示出了本公开方案中的散射补偿的结果。
图6示出了根据本公开的实施例的幻影反射补偿方法的流程图。
图7示出了根据本公开的实施例的能够进行幻影反射补偿的电子设备的框图。
图8示出了根据本公开的实施例的幻影反射补偿模型的图示。
图9示出了根据本公开的实施例的幻影反射补偿模型的提取。
图10A示出了根据本公开的实施例的由校准用图像提取幻影反射补偿模型的示例性基本流程,图10B和10C示出了根据本公开的实施例的示例性图像旋转操作的示意图。
图11示出了包含根据本公开的实施例的幻影反射补偿的图像处理流程。
图12示出了包含根据本公开的实施例的幻影反射补偿的图像处理流程。
图13示出了根据本公开的实施例的幻影反射补偿的执行结果。
图14A和14B示出了根据本公开的实施例的对于dToF传感器的幻影反射补偿。
图15示出了根据本公开的实施例的对于点ToF的幻影反射补偿。
图16示出了根据本公开的实施例的拍摄装置。
图17示出了示出了能够实现本发明的实施例的计算机系统的示例性硬件配置的框图。
虽然在本公开内容中所描述的实施例可能易于有各种修改和另选形式,但是其具体实施例在附图中作为例子示出并且在本文中被详细描述。但是,应当理解,附图以及对其的详细描述不是要将实施例限定到所公开的特定形式,而是相反,目的是要涵盖属于权利要求的精神和范围内的所有修改、等同和另选方案。
具体实施方式
在下文中将结合附图对本公开的示范性实施例进行描述。为了清楚和简明起见,在说明书中并未描述实施例的所有特征。然而,应该了解,在对实施例进行实施的过程中必须做出很多特定于实施方式的设置,以便实现开发人员的具体目标,例如,符合与装置及业务相关的那些限制条件,并且这些限制条件可能会随着实施方式的不同而有所改变。此外,还应该了解,虽然开发工作有可能是非常复杂和费时的,但对得益于本公开内容的本领域技术人员来说,这种开发工作仅仅是例行的任务。
在此,还应当注意,为了避免因不必要的细节而模糊了本公开,在附图中仅仅示出了与至少根据本公开的方案密切相关的处理步骤和/或设备结构,而省略了与本公开关系不大的其他细节。
以下将参照附图来详细描述本发明的实施例。应注意,在附图中相似的附图标记和字母指示相似的项目,并且因此一旦一个项目在一个附图中被定义,则对于随后的附图无需再对其进行论述。
在本公开中,术语“第一”、“第二”等仅仅用于区分元件或者步骤,而不是要指示时间顺序、优先选择或者重要性。
在本公开的上下文中,图像可指的是多种图像中的任一种,诸如彩色图像、灰度图像等。应指出,在本说明书的上下文中,图像的类型未被具体限制,只要这样的图 像可经受处理以便进行信息提取或检测即可。此外,图像可以是原始图像或者该图像的经处理的版本,诸如在对图像执行本申请的操作之前已经经受了初步的过滤或者预处理的图像的版本。
在通过摄像装置拍摄场景时,得到的图像中通常会存在噪声,例如可能包括散射和幻影反射(ghostreflection)现象。虽然在一些情况下这些噪声现象可以增加拍摄图像的艺术效果,例如通过RGB传感器拍摄的一些风景照等的情况,但是在很多情况下,相比于RGB传感器而言,这种噪声对于使用光来测量距离的所有传感器(例如基于ToF技术的传感器,用于3D测量的结构光传感器等等)特别有害。以下将结合附图简单描述飞行时间(ToF,Timeto Fight)技术以及ToF传感器拍照时所出现的噪声问题。需注意,这些噪声问题对于基于其他技术的图像传感器,例如结构光传感器以及RGB传感器也因相同的原理而存在,简洁起见本公开不再分别描述。
在飞行时间技术中,使用光发射器来照亮场景,并且测量光返回到传感器所花费的时间,即光从发射到接收的时间差,从而基于所测量的时间可计算距场景的距离,距离d=ct/2,其中c是光速,t是所测量的时间,如图1所示。可例如通过脉冲(直接飞行时间)或连续波(间接飞行时间)来完成光发射。然而,对于采用基于ToF的传感器的相机,在进行拍照时会出现噪声现象,例如可能包括散射和幻影反射现象。这样的噪声现象可能是由于相机中的光反射导致的。
特别地,在使用相机对于近距物体进行拍摄时,这个接近的物体将向着相机返回大量的活动光,对于相机而言就类似于一个明亮光源,从而在相机中产生大量的光反射而导致散射。以下将参照附图对此进行描述。如图2A所示,通过摄像模块对包含三个物体的场景进行拍照,该摄像模块包括透镜和传感器,也就是成像器,物体1,2和3距成像器的距离分别为r(1),r(2)和r(3),朝这三个物体发射的光被物体反射并返回到摄像模块中成像器的相应位置,即成像位置S(1),S(2)和S(3)。物体1非常靠近摄像模块,使得反射光强度高,会在模块内部(例如透镜和成像器件之间)反弹。在拍摄的图像中,来自物体1的信号将在其的位置周围散射,并将与来自物体2和3的信号混合。ToF传感器会将这些信号合并,而对于物体2和3提供错误的深度(测量深度在距离r(1)和距离r(2)或r(3)之间)。
此外,在相机中往往会在透镜之前设置照相滤光器(photographic filter),这样可能导致产生幻影反射。如图2B所示,通常情况下,光线会经由滤光器和透镜入射到传感器上的成像点,如实线及其上的箭头所示,但是部分光线会从成像点朝向 透镜反射并且透射通过透镜,如反向箭头所示。此时,由于设置有照相滤光器,信号会被滤光器朝着透镜反射,继而经由透镜入射到传感器上,如虚线箭头所示,就如同来自不同方向的光在传感器上成像,从而在除成像点之外另外产生作为幻影图像。
尽管此现象在RGB传感器上也可见,例如当拍摄近距物体或者明亮物体时,会在其附近或者中心对称位置处出现幻影图像,如图3中圆圈部分所指示的,相比于明亮白色斑块的浅色图像,但是ToF传感器会检测到该幻影,这意味着在传感器前面检测到错误的深度。以下将结合附图来说明相机拍摄的图片中的幻影反射现象带来的影响。
图4A示出了手机拍摄的场景图的RGB图像,拍摄模式是背景虚化模式(bokehmode),积分时间是300μs。其中在图像右侧存在近距物体。图4B示出了该场景图的置信度图(confidenceimage),该置信度图指示场景图中的深度信息的置信度,特别地,置信度图中的各像素指示场景图中的各像素提供的深度的置信度。从中可见,右侧指示近距对象,其由于近距离而显示为明亮的白色,由于如上所述的近距物体的影响,在图像中间部分反映出散射效果(靠近白色的杂乱的白点),而在图像左侧反映出幻影反射(暗色背景上的杂乱白色区域)。图4C示出了该场景图的深度图(depthimage),该深度图指示场景中的物体的深度信息,其中深度图中的各像素指示相机到场景中的物体的距离。其中可见,在深度图的左侧会出现幻影反射导致的类似于物体的灰色阴影部分,其往往会被误认为提供深度信息。由上可见,在拍摄图像中,散射部分是位于物体图像和幻影反射部分之间的。由于幻影反射的存在,将在图像的左侧提供该近距物体的错误的深度信息,深度通常非常浅,从而导致无法正确识别近距物体的信息,尤其是深度信息。
由上可见,在通过使用包含进行光测距的传感器的相机系统(特别是包含ToF传感器等的相机系统)来拍摄场景的情况下,这种幻影反射现象是非常有害的,会导致检测到错误的深度,而错误的深度信息对于提供高质量图像以及后续的许多应用来说都会造成不利影响。但是,当前的技术中,对于拍摄图像的处理没有特别针对幻影反射进行补偿,从而无法有效地消除幻影反射以获得正确的物体细节信息,特别是深度信息。
图5A示出了本公开提出的图像处理中的散射补偿流程,其中对于ToF原始数据进行散射补偿,然后对于经散射补偿的数据进行后续数据处理,以得到置信度图像和深度图像。该后续数据处理可包括本领域已知的用于生成置信度图像和深度图像的处理,这里将不再详细描述。
如前所述,散射效果可能是由于摄像装置中来自近距物体的光在传感器和透镜之间反射而导致的。这样会在物体周围产生一些模糊,因此图像的边缘不太清晰。可以通过能够描述由点或像素生成的这种模糊的特性的适当函数来进行建模,该函数可以是例如PSF函数(点扩展函数)。然后根据建模结果,可以应用算法来消除由图像中所有点/像素生成的特定模糊。该算法可以是例如反卷积算法。应指出,散射的建模和补偿可以采用本领域已知的其他合适的函数和算法来进行,这里将不再详细描述。
图5B示出了根据本公开实施例的示例性散射补偿操作。对于在场景中心存在明亮的白色卡作为拍照对象时,该对象周围会出现明显的模糊,而且还可能整个图像中产生模糊,如左侧图像所示。对于此,通过利用与建模函数(例如,PSF函数)相对应(例如,逆变换)的反卷积算法将消除此散射,从而使得白色斑块的边缘会更清晰,图像中的模糊被消除,散射效果得到补偿,如右侧图像所示。
但是,散射补偿并不能有效的消除幻影反射。图5C示出了散射补偿的结果,该图像包含对应于如图4A所示的RGB图像的置信度图和深度图。(a)中自上而下分别指示原始场景图所对应的置信度图和深度图,其中包含了散射和幻影反射,(b)中自上而下分别指示经散射补偿处理后的置信度图和深度图,从中可见,即使通过散射补偿去除了图像中的散射噪声,图像左侧中仍然存在幻影反射(阴影部分),而该幻影依然会导致错误的深度测量。
因此,本公开的一个目标是能够有效地消除幻影反射。特别地,本公开提出了利用提取到的幻影反射补偿模型来对待处理的数据/图像进行加权,并且通过利用加权后的数据/图像对待处理的数据/图像补偿,从而有效地消除幻影反射。
如上所述,这种幻影反射的现象特别地是由于相机系统中滤光器所造成的反射而导致的,因此对于附加地使用滤光器的相机系统而言,根据本公开的幻影反射补偿技术是尤其有利的,而不管传感器的类型如何,是ToF传感器,结构光传感器,还是RGB传感器,或者其它类型的传感器。
以下将结合附图详细描述根据本公开的实施例。
图6示出了根据本公开的实施例的用于补偿通过摄像装置拍摄的图像中的幻影反射的方法的流程图。该方法600可包括计算步骤S601,用于通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权;以及补偿步骤S602,用于组合待补偿图像和经加权的图像以消除图像中的幻影反射。
应指出,可以应用本公开的技术方案的摄像装置可包含多种类型的光学摄像装置, 只要在该摄像装置拍摄图像时可能会由于光反射而导致产生幻影反射即可。作为示例,该摄像装置可包含前述的使用了照相滤光器的相机。作为示例,该摄像装置可包含前述的用于3D成像的相机。作为示例,该摄像装置可包含前述的包含基于ToF技术的传感器的相机。作为示例,该摄像装置还可对应于前述的用于近景拍摄的相机。诸如此类。
应指出,待补偿图像可以是任何适当的图像,例如由摄像装置获得的原始图像,或者已对原始图像进行过特定处理的图像,例如初步过滤,去混叠,颜色调整,对比度调整,规范化等等。应指出,预处理操作还可以包括本领域已知的其它类型的预处理操作,这里将不再详细描述。
根据本公开的实施例,幻影反射补偿模型实质上反映出的是摄像装置中发生的导致幻影反射的光反射的特性,也即是基于对于导致幻影反射的光反射进行建模而得到的模型。例如,如上所述该光反射可能由照相滤光器和/或透镜导致,即光反射特性与照相滤光器和/或透镜的特性相对应,因此该幻影反射补偿模型实质上是基于对照相滤光器和/或透镜的特性建模而得到的模型。应指出,该模型并不仅限于此,当摄像装置中存在其他部件可能造成导致幻影反射的光反射,甚至存在其他导致幻影反射的光学现象时,该模型同样等同于基于这样的其它部件或者其它光学现象的特性而建模。
根据一个实施例,幻影反射补偿模型可以与图像中的幻影反射强度分布有关。例如,幻影反射补偿模型可以与受幻影反射影响的图像中的强度分布有关,诸如整个图像的强度分布,或者尤其是源物体和幻影位置处的强度分布。
根据一个实施例,幻影反射补偿模型可以指示该图像中特定子区域处的幻影反射强度因子,其中,所述子区域包含至少一个像素。作为示例,该特定子区域可以是覆盖整个图像的各个子区域。作为另一个示例,该特定子区域是图像中的对应于源物体以及幻影反射位置的子区域。
根据本公开的实施例,该幻影反射强度因子可以基于图像中的强度分布得出,优选地基于图像中的幻影反射的强度分布得出,并且可指的是被设定为使得最小化补偿后的场景图像中的变异(variation)、尤其是最小化幻影反射位置处图像的强度与其相邻区域处图像的强度变异以用于去除幻影反射的因子。作为示例,该变异可指的是强度变异,例如幻影反射区域和该幻影反射周边相邻区域相比的图像强度变异。在此情况下该因子可被称为幻影反射补偿因子。
根据本公开的实施例,幻影反射补偿模型可以用各种形式来表示。以下将结合附图来示例性地描述幻影反射补偿模型。图8示出了根据本公开的一个实施例的示例性幻影反射补偿模型,其中(a)示出了该模型的平面化表示,(b)示出了该模型的三维图形表示,其中横轴和纵轴指示模型的平面大小,其对应于图像的尺寸,竖轴指示模型的幻影反射强度因子的数值。
根据实施例,该模型可包含强度因子(intensity factor)、中心偏移(central shift)、尺寸等有关的各项参数。特别地,这些参数应被设定为使得模型尽可能地匹配拍摄装置中的部件的特性,如上所述。
该中心偏移相关的参数可包括参数cx和cy,它们指示在补偿操作中进行图像变换操作(例如,包括旋转和移位)的基准位置相对于图像中心的偏移,例如分别在水平和垂直方向上。特别地,通过参数cx和cy指示的位置实际上对应于要进行图像旋转的中心轴线,从而意味着图像将进行偏心旋转。作为一个示例,cx和cy可直接指示在后续处理中用于旋转图像的中心轴线相对于图像中心的偏移量,使得在进行旋转之前可以根据该偏移量将该中心轴线移动到图像中心。作为另一示例,cx和cy可对应于该模型图示的中心点位置,如图8中所示的平面化表示的中心点位置,这样在进行旋转之前可以将中心轴线从该位置移动到图像中心。根据实施例,cx和cy至少可依赖于透镜的特性,当然还可能与其它部件的特性有关。确定该中心偏移的目的是对图像进行适当的定位,使得旋转后的图像中的幻影能够和原图像中的目标对齐,具体地,可以例如通过实验或校准测量确定cx和cy的取值。
尺寸相关的参数可包括参数width和length,分别对应于图像的宽度和长度(以像素为单位)。考虑到为了便于应用,模型的图示应该与图像相对应,因此该宽度和高度也可分别指示模型图示的宽度和长度,如图8(b)中的三维图示中的平面所示。width和length可依赖于传感器的像素布置。
强度因子相关的参数可包括指示与图像相对应的强度因子分布的参数。强度因子分布可通过适当的分布函数来表达以指示例如图像的各个像素位置处的幻影反射强度因子,如图8(b)直观可见。
根据实施例,强度因子分布被设置为使得靠近图像中心的子区域处的幻影反射强度因子高于靠近图像边缘的子区域处的幻影反射强度因子。在拍摄图像中存在幻影反射的情况下,幻影反射根据出现位置的不同而会呈现出不同的光强度,特别地从中心向边缘逐渐变弱,而对于深度测量造成的不利影响也会随着强度变小而逐渐变小。因 此,通过如上所述地设置强度因子,可以对图像中的幻影反射造成适当地削弱甚至消除,例如幻影反射的强度越大,则通过更大的因子对其进行削弱和消除,从而提供准确且有效的幻影反射补偿。
根据实施例,强度因子分布可以按照特定分布函数被确定。根据实施例,可以包含至少一个特定分布函数,并且每个函数可以具有对应的权重。作为示例,强度因子=αf(1)+βf(2)+…,其中f(1),f(2)分别指示特定函数,而α,β分别指示各函数的权重。根据实施例,分布函数的参数以及用于分布函数的权重可以例如根据拍摄装置中造成幻影反射的光反射的特性,尤其是引起该反射的部件的光学特性来设定,以便尽可能地匹配(逼近)该特性。例如,可以根据预先测试或者实验中获得的经验值确定相应参数的取值,也可以通过进一步的校准操作在经验值的基础上进行调整。
优选地,强度因子分布遵循高斯分布。使用2个参数计算高斯函数,第一个参数是std,即高斯函数的标准偏差,mu是高斯函数的平均值。应指出这两个高斯函数参数std,mu可以与拍摄装置中的造成幻影反射的反射光强度有关。特别地,参数std,mu可依赖于拍摄装置中可能引发光反射的部件的特性(例如,光学特性),例如上述的透镜、照相滤光器等的特性。根据实施例,强度因子的表达式可包含特定数量的高斯函数,各高斯函数可被施加相应的权重。
作为一个示例,该模型例如可如下地表示:
中心偏移:
cx=-2.5,cy=-2.5,width=240,length=180
强度因子=
1.3*Gaussian(std=38,mu=0)+0.6*Gaussian(std=50,mu=60)
应指出,强度因子表达式中所给出的高斯函数的参数,尤其是高斯函数本身的参数以及高斯函数的权重,可依赖于前述光学部件,例如滤光器、透镜,的光学特性曲线而选择,例如使得强度因子分布更好地对应于(例如,反向匹配)光学特性曲线以便消除该光学特性造成的光反射的影响。或者,该参数可根据经验设定初值,然后通过进一步的校准操作在经验值的基础上进行调整。
应指出,上述幻影反射补偿模型中强度因子使用高斯函数表达仅是示例性的,在本公开中还可以采用其他分布函数来表示,只要该分布函数能够使得模型准确地匹配图像中的幻影反射的强度分布,尤其是匹配反映摄像装置中导致幻影反射的光反射的 特性,例如是引发导致幻影反射的光反射的部件的特性即可。作为示例,分布函数可以采用具有正态分布的其它函数。作为另一示例,可以采用具有其它分布的函数,例如柯西分布,伽马分布等。
根据一个实施例,该幻影反射补偿模型是从预定数量的校准用图像中提取出的。作为示例,校准用图像是通过拍摄装置拍摄用于进行校准的特定场景而获得的。该预定数量可通过经验指定,或者采用先前校准中所采用的数量。
图9示出了模型是由多个图像提取出的。左侧指示用于从中提取幻影反射补偿模型的图像,它们是针对具有白色测试卡(whitechart)的校准场景而获得的图像,各场景中的白色测试卡处于不同的位置,分别位于四个拐角位置和中心位置,每个图像中同时包含指示白色测试卡的亮白色斑块和指示幻影反射的淡色斑块。应指出,校准用图像的数量和布置不限于此,只要能够适当地反映出幻影反射的信息即可。例如,白色测试卡可以布置在更多的位置,获得更多的校准用图像,以便更详细地反映出场景中的幻影反射的信息。
根据本公开的实施例,可以采用各种方式来从校准用图像提取幻影反射补偿模型。根据一个实施例,幻影反射补偿模型可被确定为使得特定数量的校准用图像在应用该模型后的强度变异满足特定要求,如前所述,所述强度变异可以指的是幻影反射区域和该幻影反射周边相邻区域相比的图像强度变异,即幻影反射区域与该幻影反射周边相邻区域的图像强度差异。根据另一个实施例,可选地或者附加地,幻影反射补偿模型还可被确定为使得应用模型后的图像中的幻影反射得以消除或缓解,幻影反射的消除或缓解可以指的是幻影反射所在位置处测得的深度/RGB信息和真实场景一致或接近。也就是说,以强度变异(以及可选地或者附加地,幻影反射消解程度)满足特定要求为条件来提取幻影反射补偿模型。
根据一个实施例,强度变异满足要求可指的是全部特定数量的场景图像或者其中至少一些场景图像所获得的变异的统计值,例如这些场景图像的变异之和,平均值等等,满足要求。作为示例,特定要求可指的是该强度变异小于特定阈值,或者特定要求可指的是图像的变异最小。由此,满足该特定要求意味着幻影反射区域与该幻影反射周边相邻区域强度基本一致,变异小,平滑,没有分界,这样可基本消除幻影反射的影响。
模型提取过程可通过各种方式来执行。根据实施例,可以采用迭代的方式来执行。作为示例,可以设定幻影反射补偿模型的各个参数的初值,利用设定的模型来计算上 述的图像强度变异(以及可选地或者附加地,幻影反射消解程度),并且验证图像强度变异(以及可选地或者附加地,幻影反射消解程度)是否满足特定要求。如果不满足则继续调整参数的设定值,进行下一次操作,直到强度变异满足特定要求,并且确定此时对应的模型为希望的幻影反射补偿模型,以用于后续的图像补偿处理。应指出,可以通过迭代操作被确定的模型参数可以至少包括模型的分布函数的相关参数,例如高斯函数自身的参数以及在存在两个或更多个高斯函数的情况下的各高斯函数的权重。
以下将参照图10A来描述一次模型提取操作中一个校准图像的强度变异的确定过程。其中,该校准图像包含白色测试卡及其幻影反射,可以作为用于提取模型的图像。应指出,这样的确定操作流程可以分别针对每个用于模型提取的图像来执行。
首先,根据幻影反射补偿模型的中心偏移参数来进行图像变换。如前所述中心偏移参数指示了旋转中心相对于图像中心的偏移量,因此图像变换实质上指示图像进行偏心旋转,即围绕偏离图像中心的中心轴线进行旋转。图10B所示直接旋转180度的情况,其中交叉符号的位置对应于由中心偏移指示的偏心轴线位置,图像变换可指的是直接围绕该偏心轴线进行旋转,以得到最终图像。
图像变换还可以通过移位和旋转操作来进行,即移位、旋转、再移位的操作过程。如图10C所示,首先将旋转中心依据该参数值进行移位(例如,根据cx和cy移动到中心),然后围绕中心处的中心轴线进行旋转,然后根据参数值将移位后的中心反向移位(即,根据-cx和-cy来移动该中心)。
应指出,旋转可以旋转任意角度,只要使得旋转后的图像中的源对象和幻影反射位置分别与之前图像中的幻影反射位置和源对象位置重叠即可。作为一种优选的示例,旋转可以旋转180°。根据一种实现,移位和旋转使得移位和旋转后的图像中的幻影与原图像中的对象的位置相对应,而移位和旋转后的图像中的对象与原图像中的幻影的位置相对应。这样,由于移位和旋转后的图像对象位置与原图像的幻影位置对齐,这样可以利用强度因子对白色斑块的高强度进行加权,用加权强度值来抑制幻影位置处的低强度,从而有效地抑制幻影强度,实现幻影的消除。另一方面,利用强度因子对移位和旋转后的图像中的其他位置所进行的加权得到的加权强度值非常的小,在抑制原图像中的幻影过程中能够保证对原图像除幻影位置外的其他位置的强度值产生较小的影响。
然后,将变换后的图像与幻影反射补偿模型的幻影反射强度因子进行相乘(即上 述加权)。特别地,将幻影反射补偿模型中的各位置处的因子与变换后的图像的对应位置的像素强度进行相乘,以得到强度缩放后的图像。
最后,将原始待补偿图像与旋转并强度缩放后的图像相对应地进行强度相减。例如,从原始待补偿图像中的区域处的强度减去强度缩放后的图像的对应区域(例如移位和旋转后对应位置的像素位置处)的强度。从而可以得到经补偿后的图像,并能够计算其中的强度变异,尤其是幻影反射位置和该幻影反射位置周边相邻区域相比的强度变异(以及可选地或者附加地,幻影反射消解程度)。
将上述确定操作过程类似地应用于其他校准用图像,可以得到在此次模型提取操作中各个图像的强度变异(以及可选地或者附加地,幻影反射消解程度),然后可以判断这些图像的强度变异(以及可选地或者附加地,幻影反射消解程度)的统计数据是否满足特定条件。
作为一个示例,如果是阈值条件,则判断统计图像的强度变异数据是否小于预定阈值,和/或幻影的消解程度是否大于对应预定阈值。如果是,则可以认为当前所采用的补偿模型是所希望,然后停止模型提取操作,并且将该希望的模型作为在实际拍摄过程中使用的幻影反射补偿模型。如果不是,可以步进式地调整模型的参数,然后重复上述过程,直到强度变异的统计数据满足阈值要求即可。
作为另一个示例,如果最小化条件,则如果此次提取操作所确定的强度变异的统计数据相比于前一次不再变小,以及/或者幻影的消解程度的统计数据相比于前一次不再变大,则可以认为强度变异的统计数据被最小化,幻影的消解程度的统计数据被最大化,而停止模型提取操作,并且将前一次操作所对应的补偿模型作为最终的补偿模型。
应指出,上述迭代操作中模型参数的初值、步进大小等等可以被设定为任何适当的值,只要其有助于迭代收敛即可。此外,在迭代操作中,每次可以同时改变全部模型参数,或者每次仅改变一个或者多个参数。前者可以对应于通过迭代来同时确定全部的模型参数的情况,后者可以对应于如下情况,首先通过迭代来确定一个或多个参数,然后在此基础上通过迭代确定其他的参数。
根据另一种实现,可以利用幻影反射补偿模型构建最小化方程,并且当通过求解方程得到使所有场景的幻影得以预定程度的消解而图像强度变异最小的解时,将得到希望的幻影反射补偿模型。作为示例,cx,cy,以及各高斯函数的权重中的至少一个可以作为变量来构建方程组。
作为示例,可以将图像中的强度分布表示成向量或矩阵,则如上参照图10A描述的图像与模型相乘可以表示为数学意义上的向量或矩阵乘法,由此可以将图10A所描述的确定强度变异的操作用向量或矩阵方式表示出来,从而可以应用适当的方法来进行最小化求解,例如最小二乘法等。
根据本公开的实施例,幻影反射补偿模型可以是在摄像装置被用户使用之前被确定的,例如生产过程,出厂测试时,等等。作为示例,可以是在生产过程中与其它校准工作(例如,对于ToF相机,温度补偿、相位梯度、循环误差等等)一起进行的。这样,该幻影反射补偿模型可以被预定构建好,并存储在图像摄像装置中,例如相机中。
根据本公开的实施例,幻影反射补偿模型可以在摄像装置被用户使用期间被确定。作为示例,在用户初次使用时,可以提示用户进行相机校准。从而用户可以按照操作指示来拍摄校准图像,由此从所拍摄的图像中提取出幻影反射补偿模型。作为另一示例,在用户拍摄了特定数量的图像(例如,快门使用了特定次数等)的情况下,可以提示用户进行模型更新。
根据本公开的实施例,幻影反射补偿模型可以在摄像装置进行产品维护服务中被更新或推送。作为示例,在既有幻影反射补偿功能的摄像装置更换摄像装置的照相滤光器和/或透镜时,或者在未有幻影反射补偿功能的摄像装置进行软件升级等等时,可以执行上述模型提取过程以进行模型更新或建立。
上文已经描述了幻影反射补偿模型可以等同于表征在摄像装置中造成导致幻影反射的光反射的部件的特性,特别地是透镜和摄影滤光器特性。从某种意义上而言,幻影反射补偿模型与摄像装置中的透镜和/或照相滤光器是相对应的。根据一个实施例,如果相机包含的滤光器和透镜、尤其是透镜,是固定不变的情况下,则所构建的幻影反射补偿模型可以相对固定的,尤其是在拍摄过程中保持不变的。根据另一实施例,在相机的滤光器和/或透镜是可更换的情况下,在摄像装置更换这样的部件时,幻影反射补偿模型也需要被相应地更新。根据一个实施例,在更换部件时可以自动或者提示用户来提取与更换后的部件相对应的幻影补偿模型,可以如上所述地执行。根据另一种实施例,可以自动选用与更换后的部件相对应的模型。例如在相机系统中预先存储与适用于该相机系统的所有滤光器和/或透镜对应的幻影补偿模型的集合。这样,在相机系统更换了滤光器和/或透镜之后,可以从所存储的集合中自动选择与更换后的滤光器和/或透镜相对应的幻影反射补偿模型来进行应用。根据还另一实施例, 考虑到滤光器和/或透镜的更换往往可能造成光学特性的一定程度的改变,例如透镜特征可能改变,继而影响中心偏移参数,因此在进行滤光器和/或透镜更换时,即使预先存储了对应的模型,仍可自动或者提示用户来提取新的模型,而不进行自动选择。例如可以通过系统预先设置,或者用户提示来进行此操作。例如,可以在系统中预先设置在任何情况下都自动更新模型。或者例如,可以提示用户是进行模型校准,还是自动选择模型。
在如上所述的确定了根据本公开的幻影反射补偿模型之后,则可以应用该模型来进一步优化拍摄图像,提高图像质量。
根据本公开的实施例,在对图像加权的操作中,可以对于拍摄图像中的每个子区域,利用幻影反射补偿模型中的对应的幻影反射强度缩放因子进行强度缩放,从而获得经强度缩放的图像作为经加权的图像。
根据一个实施例,在对图像加权的操作中,可以将待补偿图像进行旋转;以及使用幻影反射补偿模型对旋转后的图像进行加权(例如,相乘)以得到经加权的图像。根据一种实施例,通过将待补偿图像和经加权的图像中的对应位置处的像素强度进行相减以获得经补偿的图像。
应指出,这里的旋转、相乘和相减等操作可以与上文参照图10描述的方式类似的方式来进行,只是左侧的输入图像为待补偿的拍摄图像,右侧的输出图像为补偿后的拍摄图像,其中幻影发射已经被有效地消除。特别地,依照幻影反射补偿模型的中心参数cx和cy来移位和旋转待补偿图像,也就是说旋转可以是围绕一个偏离图像中心的轴进行的。在进行相乘操作时,将变换后的图像的各位置与相应的幻影反射补偿模型因子进行相乘,例如可以在模型图示与变换后的图像对齐后进行相乘以进行强度缩放。
根据一些实施例,根据本公开的幻影反射补偿操作可以在散射补偿之前或者之后进行,而能够取得基本相似的有利效果。图11(a)示出了在散射补偿之前进行幻影反射补偿,也即是说上述的待补偿图像是由ToF传感器得到的原始图像。而图11(b)示出了在散射补偿之后进行幻影反射补偿,也即是说上述的待补偿图像是已经进行了散射补偿的图像。
根据本公开的实施例,该方法还包括散射补偿步骤,用于补偿图像中的散射。根据一些实施例,根据本公开的摄像装置为使用拍摄滤光器的摄像装置。根据一些实施例,该拍摄装置包括ToF传感器,待补偿的图像包括深度图像。
前述的示例中主要描述了一个场景的待补偿图像是一个图像的情况。但是,本发明的实施例同样可以用于对于一个场景存在至少两个待补偿图像的情况。
根据一些实施例,场景拍摄所获得的原始图像数据可对应于至少两个子图像,并且对于每个子图像执行根据本公开的幻影反射补偿操作,包括上述的计算和补偿步骤,由此得到经补偿的至少两个子图像。可以通过组合经补偿的至少两个子图像来得到最终的对应于该场景的补偿图像。
作为示例,所述至少两个子图像包括待对应于原始图像数据的I图像和Q图像。以下将以I和Q图像为例来说明针对子图像的补偿。图12示出了包含针对I和Q图像进行幻影反射补偿的操作示例。
为了能够测量距离,iToF传感器通常需要捕获4个分量。这些分量是关于在预定义相移的情况下发射器(激光器)之间的相位偏移以及光到传感器的返回。这4个分量分别是针对0度,90度,180度和270度被记录。获得这4个分量作为ToF原始数据。
根据这些原始数据,我们可以计算I和Q图像。其中I可以指示所捕获的同相数据,例如I图像是针对0度的分量和针对180度的分量的组合,Q可以指示所捕获的正交相位的数据,例如Q图像是针对90度的分量和针对270度的分量的组合。作为一个示例,它们的计算如下:
I=分量(0度)-分量(180度)
Q=分量(90度)-分量(270度)
然后分别针对I和Q图像进行补偿,具体补偿方式可如上参照图10所描述的方式进行,尤其是分别针对I和Q图像中的每个,分别执行上述的幻影反射补偿操作,这里将不再详细描述。
最后,利用补偿后的I和Q图像用于生成置信度图像,以及深度图像。
作为一个示例,置信度图像是根据I和Q计算的,如下所示:
置信度=abs(I)+abs(Q)
其中abs()指示绝对值函数,分别指示I图像和Q图像中的各子区域或者像素点的置信度的绝对值。作为示例,置信度图还可以采用本领域已知的其它方式来获得,这里将不再详细描述。
作为示例,深度图可从I和Q图像的至少一者来获得,其可以采用本领域已知的方式来获得,这里将不再详细描述。
但是应指出,I和Q图像仅仅是示例性的。其它的子图像也是可以的,只要其能够从摄像装置拍摄获得并且能够组合得到置信度图像和深度图即可。
图13示出了根据本公开的实施例的幻影反射补偿的有益效果。其中分别示出了对应于图4A中所示的图像的置信度图像和深度图。左侧部分自上而下分别指示原始场景图所对应的置信度图和深度图,其中包含了散射和幻影反射,中间部分自上而下分别指示经补偿处理后的置信度图和深度图,从中可见,由于主要是进行散射补偿,因此即使通过散射补偿去除了图像中的散射噪声,图像左侧中仍然存在幻影反射。右侧部分自上而下分别指示利用本公开的方法进行补偿的置信度图和深度图,从中可见,通过本公开的方法有效去除了图像中的幻影反射,可以获得高质量的输出图像。
上文主要使用iToF传感器中的示例描述了幻影反射的问题以及幻影反射补偿操作。但是如上所述的,幻影反射不取决于发射器或图像传感器,而是主要依赖于摄像装置中的导致光反射的部件,尤其是它主要取决于相机中的照相滤光器和/或透镜的使用。这意味着可以在其他使用光的3D测量系统上观察到此现象,包括(但不限于):
·使用全视场发射器的间接ToF传感器,
·使用点ToF发射器的间接ToF传感器,
·直接ToF传感器,
·结构光传感器,
·其它类型的ToF传感器。
以下将参照附图描述根据本公开的实施例的针对其它类型的ToF传感器的幻影反射补偿。
图14A和14B示出了在利用直接ToF(dToF)传感器进行图像拍摄时的幻影反射补偿。
与iTOF不同,dToF在短时间内集中了光能。它包括通过激光或LED的短脉冲生成光子包,以及直接计算这些光子到达目标并返回的传播时间。然后,可以采用适当的技术用于将多个事件累积到直方图中,以便在通常均匀分布的背景噪声上识别目标峰位置。例如,该技术可以是被称为时间相关单光子计数(TCSPC)的技术,TCSPC技术是本领域已知的技术,这里将不再详细描述
对于dToF而言,幻影反射可表现为受影响像素的直方图中的“幻影峰值”。如图14A所示,上部的直方图对应于视角场(FoV)中具有高强度的对象的直方图,其中高峰值指示其对应深度。下部的直方图对应于出现幻影反射位置处的直方图,其中 由于幻影反射的影响也出现深度峰值,从而会导致错误的深度检测。
根据本公开的实施例,可以针对dToF进行幻影反射补偿。特别地,对于由捕获数据得到的像素直方图,利用与前文所述的方式进行幻影反射补偿,例如进行移位和旋转、相乘、相减等操作进行幻影反射补偿,如图14B所示。从右侧的输出直方图可见,通过本公开的幻影反射补偿,幻影反射对应的直方图被显著抑制,其峰值远小于真实对象的峰值,从而不会造成错误的深度检测。
根据一个实施例,对于dToF,其幻影反射补偿模型同样可以如前文参照图10所述的操作来生成,只是输入的是捕获数据得到的像素直方图。而且,上文所述的任何其它的操作同样适用于dToF。根据另一个实施例,考虑到幻影反射补偿模型主要对应于拍摄装置中的引起导致幻影反射的光反射的部件,例如透镜和/或滤光器,因此在使用任一种ToF传感器针对该部件获得了幻影反射补偿模型之后,该模型可以应用于其他类型的ToF传感器,甚至是使用该部件的其它类型的传感器。
图15示出了在利用点ToF(spotToF)传感器进行图像拍摄时的幻影反射补偿。图15中(a)为在不存在近距物体时拍摄的置信度图,(b)示出了理想的存在近距物体时获取的置信度图,其中即使场景中存在近距物体,除了近距物体本身的点信息之外,不会存在其他的点信息。(c)为存在幻影反射的置信度图,其中可见当在场景的右侧存在近距物体时,在场景的左侧会出现新点,这些新点是通过幻影反射生成的,它们可能会产生错误的深度,或者可能会将信号与当前斑点混合。(d)为利用本公开的方案进行幻影反射补偿后的置信度图,其中幻影反射导致的新点被有效地去除,提高了图像质量。
根据本公开的实施例,根据本公开的幻影反射补偿功能可以自动地使用,或者由用户选择使用。
作为示例,本发明的幻影补偿功能可以自动实现。例如,幻影补偿功能可以与相机的特定拍摄模式进行关联,而在拍摄过程中开启该拍摄模式时会自动启动该幻影补偿功能。例如,在近景拍摄模式,例如微距,人像等模式中会自动开启幻影补偿功能,而在远景拍摄模式,例如风景等模式中则不会自动开启幻影补偿功能。作为另一示例,相机也可以根据与拍摄对象的距离来判断是否自动开启幻影补偿功能。例如,当与拍摄对象的距离大于特定距离阈值时,则可以认为是远景拍摄而不用开启幻影补偿功能,而当与拍摄对象的距离小于特定距离阈值时,则可以认为是近景拍摄而不用开启幻影补偿功能。
作为示例,本发明的幻影补偿功能可以由用户设定。例如,在相机的拍照操作界面上会出现提示,以提示用户是否开启幻影补偿功能。当用户选择该功能时,就可以开启幻影补偿功能以在拍照时进行幻影补偿/消除。例如,在触摸式用户操作界面上出现的按钮,或者在相机上的可以起到幻影补偿功能的按钮。
根据本公开的实施例,根据本公开的幻影反射补偿模型可被多种方式存储。作为以供示例,该模型可以与摄像装置、尤其是包含透镜和滤光器的相机镜头,固化在一起,这样即使相机镜头更换到其它设备上仍可固定地使用该模型,无需在进行模型提取。另一方面,该模型可以存放在能够与摄像装置连接以进行拍照的设备中,例如便携式电子设备等。
如上所述,幻影反射对于获取深度信息是尤其不利的,因此,本公开的技术方案尤其适合于需要获得拍摄场景中对象的深度信息的各种应用,诸如需要测量深度信息的摄像装置等等。例如,本公开的技术方案可适合于采用了基于ToF技术的传感器的摄像装置,诸如iToF、全视场ToF(Full-field ToF)、点ToF(spotToF)等。例如,本公开的技术方案可以适合于3D摄像装置,因为深度/距离信息对于获得良好的3D图像是非常重要的。
注意,即使在RGB传感器的情况下反射幻影影响不如3D测量系统中那么严峻,但是本公开的实施例同样可应用于RGB传感器,尤其是当系统中仅存在一个照相滤光器时,例如可以应用于便携式移动设备,其相机配备有盖玻璃(coverglass),该场景中盖玻璃实现为滤光器。
此外,本公开的方案可以应用于有可能产生幻影反射的某些特定的拍摄模式。例如,鉴于在拍摄近距物体时可能导致大量的光反射,继而可能导致幻影反射,根据本公开的幻影反射补偿方案还尤其适合于摄像装置的与近距物体拍摄相关的模式,例如近景拍摄模式,背景虚化模式等等。
通过本公开的方案,可以有效消除图像中的幻影反射影响。特别地,通过本公开的方案,可以准确确定场景中对象的深度信息,即距离信息,从而可以在拍照时准确对焦,或者获得高质量的图像,以有助于后续的基于图像的应用。
例如,对于背景虚化效果,本公开的方案可以消除错误的深度并获得适当的对象距离。例如,对于自动对焦应用程序,即使物体靠近相机,也可以准确识别物体距离,使用良好的对焦距离来进行拍照。例如,对于面部ID识别,当拍照对象靠近相机(例如桌子)进行面部识别时,本公开的方案可以有效地去除图像中的幻影,继而获得高 质量的图像以进行识别。
应指出,本公开的技术方案尤其适用于便携式设备中的相机,例如手机,平板电脑等等设备中的相机。该相机的镜头和/或相机滤光器可以是固定的,也可以是可更换的。
以下将描述根据本公开的能够进行幻影反射补偿的电子设备。图7示出了根据本公开的实施例的能够进行幻影反射补偿的电子设备的框图。电子设备700包括处理电路720,该处理电路720可被配置为通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权;以及组合待补偿图像和经加权的图像以消除图像中的幻影反射。
在上述装置的结构示例中,处理电路720可以是通用处理器的形式,也可以是专用处理电路,例如ASIC。例如,处理电路120能够由电路(硬件)或中央处理设备(诸如,中央处理单元(CPU))构造。此外,处理电路720上可以承载用于使电路(硬件)或中央处理设备工作的程序(软件)。该程序能够存储在存储器(诸如,布置在存储器中)或从外面连接的外部存储介质中,以及经由网络(诸如,互联网)下载。
根据本公开的实施例,处理电路720可以包括用于实现上述功能的各个单元,例如计算单元722,用于通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权;以及幻影反射补偿单元724,用于组合待补偿图像和经加权的图像以消除图像中的幻影反射。特别地,处理电路720还可以包括散射补偿单元726以及数据路径处理单元728。每个单元可以进行如上文所述地操作,这里将不再详细描述。
散射补偿单元726以及数据路径处理单元728用虚线绘出,旨在说明该单元并不一定被包含在处理电路中,作为示例,该单元可以在终端侧电子设备中而处理电路之外,甚至可以位于电子设备700之外。需要注意的是,尽管图7中将各个单元示为分立的单元,但是这些单元中的一个或多个也可以合并为一个单元,或者拆分为多个单元。
应注意,上述各个单元仅是根据其所实现的具体功能划分的逻辑模块,而不是用于限制具体的实现方式,例如可以以软件、硬件或者软硬件结合的方式来实现。在实际实现时,上述各个单元可被实现为独立的物理实体,或者也可由单个实体(例如,处理器(CPU或DSP等)、集成电路等)来实现。此外,上述各个单元在附图中用虚线示出指示这些单元可以并不实际存在,而它们所实现的操作/功能可由处理电路本身来实现。
应理解,图7仅仅是终端侧电子设备的概略性结构配置,电子设备700还可以包 括其他可能的部件(例如,存储器等)。可选地,终端侧电子设备700还可以包括未示出的其它部件,诸如存储器、网络接口、控制器等。处理电路可以与存储器相关联。例如,处理电路可以直接或间接(例如,中间可能连接有其它部件)连接到存储器,以进行数据的存取。
存储器可以存储由处理电路720产生的各种信息。存储器还可以位于终端侧电子设备内但在处理电路之外,或者甚至位于终端侧电子设备之外。存储器可以是易失性存储器和/或非易失性存储器。例如,存储器可以包括但不限于随机存储存储器(RAM)、动态随机存储存储器(DRAM)、静态随机存取存储器(SRAM)、只读存储器(ROM)、闪存存储器。
以下将描述根据本公开的摄像装置。图16示出了根据本公开的实施例的摄像装置的框图。摄像装置1600包括补偿装置700,其可用于图像补偿处理、尤其是幻影反射补偿,该补偿装置可以由电子设备实现,诸如上文所述的电子设备700。
该摄像装置1600可以包括透镜单元1602,其可包括本领域中已知的各种光学透镜,用于通过光学成像而在传感器上进行物体成像。
该摄像装置1600可以包括照相滤光器1604,其可以包括本领域已知的各种照相滤光器,其可以安装到透镜前部。
该摄像装置1600还可以包括处理电路1606,其可以用于对获得的图像进行处理。作为示例,可以对进行补偿后的图像进行进一步的处理,或者对于待补偿图像进行预处理。
该摄像装置1600还可以包括各种图像传感器,例如前述的各种基于ToF技术的传感器。但是,这些传感器也可以位于摄像装置1600之外。
应指出,照相滤光器和处理电路用虚线绘出,旨在说明该单元并不一定被包含在摄像装置1600中,甚至可以在摄像装置1600之外而通过已知的方式进行连接和/或通信。需要注意的是,尽管图16中将各个单元示为分立的单元,但是这些单元中的一个或多个也可以合并为一个单元,或者拆分为多个单元。
在上述装置的结构示例中,处理电路1606可以是通用处理器的形式,也可以是专用处理电路,例如ASIC。例如,处理电路1606能够由电路(硬件)或中央处理设备(诸如,中央处理单元(CPU))构造。此外,处理电路1606上可以承载用于使电路(硬件)或中央处理设备工作的程序(软件)。该程序能够存储在存储器(诸如,布置在存储器中)或从外面连接的外部存储介质中,以及经由网络(诸如,互联网) 下载。
本公开的技术能够应用于各种产品。
例如,本公开的技术能够应用于摄像装置本身,例如内置于相机镜头中,与相机镜头集成在一起,这样,本公开的技术可以以软件程序的形式以便由摄像装置的处理器来执行,或者以集成电路、处理器的形式集成在一起;或者用于与摄像装置相连接的设备中,例如安装有该摄像装置的便携式移动设备,这样,本公开的技术可以以软件程序的形式以便由摄像装置的处理器来执行,或者以集成电路、处理器的形式集成在一起,甚至集成在已有的处理电路中,用于在拍照过程中进行幻影反射补偿。
本公开的技术可以应用于各种摄像装置中,例如安装在便携式设备的镜头,无人机上的拍摄装置,监控设备等中的拍摄装置,等等。
本发明可被用于许多应用。例如,本发明可被用于监测、识别、跟踪照相机捕获的静态图像或移动视频中的对象,并且对于配备有相机的便携式设备、(基于相机)的移动电话等等是尤其有利的。
另外,应当理解,上述系列处理和设备也可以通过软件和/或固件实现。在通过软件和/或固件实现的情况下,从存储介质或网络向具有专用硬件结构的计算机,例如图17所示的通用个人计算机1300安装构成该软件的程序,该计算机在安装有各种程序时,能够执行各种功能等等。图17是示出根据本公开的实施例的中可采用的信息处理设备的个人计算机的示例结构的框图。在一个例子中,该个人计算机可以对应于根据本公开的上述示例性发射设备或终端侧电子设备。
在图17中,中央处理单元(CPU)1301根据只读存储器(ROM)1302中存储的程序或从存储部分1308加载到随机存取存储器(RAM)1303的程序执行各种处理。在RAM 1303中,也根据需要存储当CPU 1301执行各种处理等时所需的数据。
CPU 1301、ROM 1302和RAM 1303经由总线1304彼此连接。输入/输出接口1305也连接到总线1304。
下述部件连接到输入/输出接口1305:输入部分1306,包括键盘、鼠标等;输出部分1307,包括显示器,比如阴极射线管(CRT)、液晶显示器(LCD)等,和扬声器等;存储部分1308,包括硬盘等;和通信部分1309,包括网络接口卡比如LAN卡、调制解调器等。通信部分1309经由网络比如因特网执行通信处理。
根据需要,驱动器1310也连接到输入/输出接口1305。可拆卸介质1311比如磁盘、光盘、磁光盘、半导体存储器等等根据需要被安装在驱动器1310上,使得从中 读出的计算机程序根据需要被安装到存储部分1308中。
在通过软件实现上述系列处理的情况下,从网络比如因特网或存储介质比如可拆卸介质1311安装构成软件的程序。
本领域技术人员应当理解,这种存储介质不局限于图17所示的其中存储有程序、与设备相分离地分发以向用户提供程序的可拆卸介质1311。可拆卸介质1311的例子包含磁盘(包含软盘(注册商标))、光盘(包含光盘只读存储器(CD-ROM)和数字通用盘(DVD))、磁光盘(包含迷你盘(MD)(注册商标))和半导体存储器。或者,存储介质可以是ROM 1302、存储部分1308中包含的硬盘等等,其中存有程序,并且与包含它们的设备一起被分发给用户。
应指出,文中所述的方法和设备可被实现为软件、固件、硬件或它们的任何组合。有些组件可例如被实现为在数字信号处理器或者微处理器上运行的软件。其他组件可例如实现为硬件和/或专用集成电路。
另外,可采用多种方式来实行本发明的方法和系统。例如,可通过软件、硬件、固件或它们的任何组合来实行本发明的方法和系统。上文所述的该方法的步骤的顺序仅是说明性的,并且除非另外具体说明,否则本发明的方法的步骤不限于上文具体描述的顺序。此外,在一些实施例中,本发明还可具体化为记录介质中记录的程序,包括用于实施根据本发明的方法的机器可读指令。因此,本发明还涵盖了存储用于实施根据本发明的方法的程序的记录介质。这样的存储介质可以包括但不限于软盘、光盘、磁光盘、存储卡、存储棒等等。
本领域技术人员应当意识到,在上述操作之间的边界仅仅是说明性的。多个操作可以结合成单个操作,单个操作可以分布于附加的操作中,并且操作可以在时间上至少部分重叠地执行。而且,另选的实施例可以包括特定操作的多个实例,并且在其他各种实施例中可以改变操作顺序。但是,其它的修改、变化和替换同样是可能的。因此,本说明书和附图应当被看作是说明性的,而非限制性的。
另外,本公开的实施方式还可以包括以下示意性示例(EE)。
EE 1.一种用于补偿通过摄像装置拍摄的图像中的幻影反射的电子设备,包括处理电路,被配置为:
通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权,其中所述幻影反射补偿模型与进行拍摄时所述摄像装置中的光反射导致的幻影反射在图像中的强度分布有关;以及
组合待补偿图像和经加权的图像以消除图像中的幻影反射。
EE2、根据EE1所述的电子设备,其中,所述幻影反射补偿模型是从预定数量的校准图像中训练得到的,并且所述幻影反射补偿模型被训练为使得在应用该幻影反射补偿模型后校准图像中的幻影反射区域相比于幻影反射区域的相邻区域的强度变异小于特定阈值或者最小。
EE3、根据EE2所述的电子设备,其中,在幻影反射补偿模型的训练中通过如下操作来确定强度变异:
根据预先设定的待训练的幻影反射补偿模型的中心移位参数将校准图像进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
将移位和旋转后的图像乘以使用具有预先设定的强度因子参数的幻影反射补偿模型;以及
将校准图像和乘以模型后得到的图像中的对应位置处的像素强度进行相减以获得强度变异。
EE4、根据EE 1所述的电子设备,其中,所述幻影反射补偿模型包含对应于该图像中各子区域的幻影反射因子,其中所述子区域包含至少一个像素。
EE5、根据EE 4所述的电子设备,其中,所述幻影反射补偿模型被设定为使得靠近图像中央的子区域处的幻影反射因子大于靠近图像边缘的子区域处的幻影反射因子。
EE6、根据EE 4所述的电子设备,其中,所述幻影反射因子是基于高斯分布被确定的。
EE7、根据EE 4-6中任一项所述的电子设备,其中,所述处理电路被配置为:
对于拍摄图像中的每个子区域,利用幻影反射补偿模型中的对应的幻影反射强度因子进行强度缩放,从而获得经强度缩放的图像作为经加权的图像。
EE8、根据EE 1所述的电子设备,其中所述幻影反射模型与所述拍摄装置中的在拍照时造成导致幻影反射的光反射的部件的特性相关,并且其中,所述幻影反射模型的参数依赖于该部件的特性。
EE9、根据EE 8所述的电子设备,其中,所述部件包括透镜和照相滤光器中的至少一个。
EE10、根据EE 8或9所述的电子设备,其中,所述幻影反射模型的模型至少包 括与中心偏移相关的参数,以及用于确定幻影反射因子的高斯分布的相关参数。
EE11、根据EE 1所述的电子设备,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述处理电路被配置为:
根据该参数将待补偿图像进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
使用幻影反射补偿模型对移位和旋转后的图像进行加权以得到经加权的图像。
EE12、根据EE 11所述的电子设备,其中,上述移位和旋转使得移位和旋转后的图像中的幻影与原图像中的对象的位置相对应,而移位和旋转后的图像中的对象与原图像中的幻影的位置相对应。
EE13、根据EE 1所述的电子设备,其中,所述处理电路被配置为:
通过将待补偿图像和经加权的图像中的对应位置处的像素强度进行相减以获得经补偿的图像。
EE14、根据EE 1所述的电子设备,其中,所述待补偿图像对应于至少两个子图像,
并且对于每个子图像执行幻影反射补偿,由此通过组合补偿后的至少两个子图像来得到补偿的图像。
EE15、根据EE 14所述的电子设备,其中,所述至少两个子图像包括通过拍摄原始图像数据获得的I图像和Q图像。
EE16、根据EE 1所述的电子设备,其中,所述摄像装置为使用照相滤光器的光学摄像装置。
EE17、根据EE 1-16中任一项所述的电子设备,其中,所述摄像装置包括ToF传感器,所述图像包括深度图像。
EE18、一种用于补偿通过摄像装置拍摄的图像中的幻影反射的方法,包括以下步骤:
计算步骤,用于通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权,其中所述幻影反射补偿模型与进行拍摄时所述摄像装置中的光反射导致的幻影反射在图像中的强度分布有关;以及
补偿步骤,用于组合待补偿图像和经加权的图像以消除图像中的幻影反射。
EE19、根据EE 18所述的方法,其中,所述幻影反射补偿模型是从预定数量的校准图像中训练得到的,并且所述幻影反射补偿模型被训练为使得在应用该幻影反射补 偿模型后校准图像中的幻影反射区域相比于幻影反射区域的相邻区域的强度变异小于特定阈值或者最小。
EE20、根据EE 19所述的方法,其中,在幻影反射补偿模型的训练中通过如下操作来确定强度变异:
根据预先设定的待训练的幻影反射补偿模型的中心移位参数将校准图像进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
将移位和旋转后的图像乘以使用具有预先设定的强度因子参数的幻影反射补偿模型;以及
将校准图像和乘以模型后得到的图像中的对应位置处的像素强度进行相减以获得强度变异。
EE21、根据EE 18所述的方法,其中,所述幻影反射补偿模型包含对应于该图像中各子区域的幻影反射因子,其中所述子区域包含至少一个像素。
EE22、根据EE 20所述的方法,其中,所述幻影反射补偿模型被设定为使得靠近图像中央的子区域处的幻影反射因子大于靠近图像边缘的子区域处的幻影反射因子。
EE23、根据EE 20所述的方法,其中,所述幻影反射因子是基于高斯分布被确定的。
EE24、根据EE 20-22中任一项所述的方法,其中,所述计算步骤进一步包括:
对于拍摄图像中的每个子区域,利用幻影反射补偿模型中的对应的幻影反射强度因子进行强度缩放,从而获得经强度缩放的图像作为经加权的图像。
EE25、根据EE 18所述的方法,其中所述幻影反射模型与所述拍摄装置中的在拍照时造成导致幻影反射的光反射的部件的特性相关,并且其中,所述幻影反射模型的参数依赖于该部件的特性。
EE26、根据EE 24所述的方法,其中,所述部件包括透镜和照相滤光器中的至少一个。
EE27、根据EE 24或25所述的方法,其中,所述幻影反射模型的模型至少包括与中心偏移相关的参数,以及用于确定幻影反射因子的高斯分布的相关参数。
EE28、根据EE 18所述的方法,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述计算步骤进一步包括:
根据该参数将待补偿图像进行中心移位,以移位后的中心为轴进行旋转,然后根 据该参数将旋转后的图像进行反向中心移位;以及
使用幻影反射补偿模型对移位和旋转后的图像进行加权以得到经加权的图像。
EE29、根据EE 28所述的方法,其中,上述移位和旋转使得移位和旋转后的图像中的幻影与原图像中的对象的位置相对应,而移位和旋转后的图像中的对象与原图像中的幻影的位置相对应。
EE30、根据EE 18所述的方法,其中,所述补偿步骤包括:
通过将待补偿图像和经加权的图像中的对应位置处的像素强度进行相减以获得经补偿的图像。
EE31、根据EE 18所述的方法,其中,所述待补偿图像对应于至少两个子图像,
并且对于每个子图像执行幻影反射补偿,由此通过组合补偿后的至少两个子图像来得到补偿的图像。
EE32、根据EE 30所述的方法,其中,所述至少两个子图像包括通过拍摄原始图像数据获得的I图像和Q图像。
EE33、根据EE 18所述的方法,其中,所述摄像装置为使用照相滤光器的光学摄像装置。
EE34、根据EE 18-33中任一项所述的方法,其中,所述摄像装置包括ToF传感器,所述图像包括深度图像。
EE35、一种用于对于利用直接飞行时间(dToF)传感器的图像拍摄进行幻影反射补偿的电子设备,包括处理电路,被配置为:
通过使用幻影反射补偿模型对由拍摄的原始数据得到的包含幻影反射的待补偿的像素直方图进行加权;以及
组合待补偿的像素直方图和经加权的像素直方图以消除幻影反射。
EE36、根据EE 35所述的电子设备,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述处理电路被配置为:
根据该参数将待补偿直方图进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的直方图进行反向中心移位;以及
使用幻影反射补偿模型对移位和旋转后的直方图进行加权以得到经加权的直方图。
EE37、根据EE 35所述的电子设备,其中,上述移位和旋转使得移位和旋转后的直方图中的幻影反射峰值与原直方图中的对象峰值相对应。
EE38、根据EE 35所述的电子设备,其中,所述处理电路被配置为:
通过将待补偿的直方图和经加权的直方图中的对应位置处的值进行相减以获得经补偿的直方图。
EE39、一种用于对于利用直接飞行时间(dToF)传感器的图像拍摄进行幻影反射补偿的方法,包括:
计算步骤,通过使用幻影反射补偿模型对由拍摄的原始数据得到的包含幻影反射的待补偿的像素直方图进行加权;以及
补偿步骤,组合待补偿的像素直方图和经加权的像素直方图以消除幻影反射。
EE40、根据EE 39所述的方法,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述计算步骤进一步包括:
根据该参数将待补偿直方图进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的直方图进行反向中心移位;以及
使用幻影反射补偿模型对移位和旋转后的直方图进行加权以得到经加权的直方图。
EE41、根据EE 39所述的方法,其中,上述移位和旋转使得移位和旋转后的直方图中的幻影反射峰值与原直方图中的对象峰值相对应。
EE42、根据EE 39所述的方法,其中,所述补偿步骤进一步包括:通过将待补偿的直方图和经加权的直方图中的对应位置处的值进行相减以获得经补偿的直方图。
EE43.一种设备,包括
至少一个处理器;和
至少一个存储设备,所述至少一个存储设备在其上存储指令,该指令在由所述至少一个处理器执行时,使所述至少一个处理器执行根据EE 18-34和39-42中任一项所述的方法。
EE44.一种存储指令的存储介质,该指令在由处理器执行时能使得执行根据EE18-34和39-42中任一项所述的方法。
虽然已经详细说明了本公开及其优点,但是应当理解在不脱离由所附的权利要求所限定的本公开的精神和范围的情况下可以进行各种改变、替代和变换。而且,本公开实施例的术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有 的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
虽然已详细描述了本公开的一些具体实施例,但是本领域技术人员应当理解,上述实施例仅是说明性的而不限制本公开的范围。本领域技术人员应该理解,上述实施例可以被组合、修改或替换而不脱离本公开的范围和实质。本公开的范围是通过所附的权利要求限定的

Claims (44)

  1. 一种用于补偿通过摄像装置拍摄的图像中的幻影反射的电子设备,包括处理电路,被配置为:
    通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权,其中所述幻影反射补偿模型与进行拍摄时所述摄像装置中的光反射导致的幻影反射在图像中的强度分布有关;以及
    组合待补偿图像和经加权的图像以消除图像中的幻影反射。
  2. 根据权利要求1所述的电子设备,其中,所述幻影反射补偿模型是从预定数量的校准图像中训练得到的,并且所述幻影反射补偿模型被训练为使得在应用该幻影反射补偿模型后校准图像中的幻影反射区域相比于幻影反射区域的相邻区域的强度变异小于特定阈值或者最小。
  3. 根据权利要求2所述的电子设备,其中,在幻影反射补偿模型的训练中通过如下操作来确定强度变异:
    根据预先设定的待训练的幻影反射补偿模型的中心移位参数将校准图像进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
    将移位和旋转后的图像乘以使用具有预先设定的强度因子参数的幻影反射补偿模型;以及
    将校准图像和乘以模型后得到的图像中的对应位置处的像素强度进行相减以获得强度变异。
  4. 根据权利要求1所述的电子设备,其中,所述幻影反射补偿模型包含对应于该图像中各子区域的幻影反射因子,其中所述子区域包含至少一个像素。
  5. 根据权利要求4所述的电子设备,其中,所述幻影反射补偿模型被设定为使得靠近图像中央的子区域处的幻影反射因子大于靠近图像边缘的子区域处的幻影反射因子。
  6. 根据权利要求4所述的电子设备,其中,所述幻影反射因子是基于高斯分布被确定的。
  7. 根据权利要求4-6中任一项所述的电子设备,其中,所述处理电路被配置为:
    对于拍摄图像中的每个子区域,利用幻影反射补偿模型中的对应的幻影反射强度因子进行强度缩放,从而获得经强度缩放的图像作为经加权的图像。
  8. 根据权利要求1所述的电子设备,其中所述幻影反射模型与所述拍摄装置中的在拍照时造成导致幻影反射的光反射的部件的特性相关,并且其中,所述幻影反射模型的参数依赖于该部件的特性。
  9. 根据权利要求8所述的电子设备,其中,所述部件包括透镜和照相滤光器中的至少一个。
  10. 根据权利要求8或9所述的电子设备,其中,所述幻影反射模型的模型至少包括与中心偏移相关的参数,以及用于确定幻影反射因子的高斯分布的相关参数。
  11. 根据权利要求1所述的电子设备,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述处理电路被配置为:
    根据该参数将待补偿图像进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
    使用幻影反射补偿模型对移位和旋转后的图像进行加权以得到经加权的图像。
  12. 根据权利要求11所述的电子设备,其中,上述移位和旋转使得移位和旋转后的图像中的幻影与原图像中的对象的位置相对应,而移位和旋转后的图像中的对象与原图像中的幻影的位置相对应。
  13. 根据权利要求1所述的电子设备,其中,所述处理电路被配置为:
    通过将待补偿图像和经加权的图像中的对应位置处的像素强度进行相减以获得 经补偿的图像。
  14. 根据权利要求1所述的电子设备,其中,所述待补偿图像对应于至少两个子图像,
    并且对于每个子图像执行幻影反射补偿,由此通过组合补偿后的至少两个子图像来得到补偿的图像。
  15. 根据权利要求14所述的电子设备,其中,所述至少两个子图像包括通过拍摄原始图像数据获得的I图像和Q图像。
  16. 根据权利要求1所述的电子设备,其中,所述摄像装置为使用照相滤光器的光学摄像装置。
  17. 根据权利要求1-16中任一项所述的电子设备,其中,所述摄像装置包括ToF传感器,所述图像包括深度图像。
  18. 一种用于补偿通过摄像装置拍摄的图像中的幻影反射的方法,包括以下步骤:
    计算步骤,用于通过使用幻影反射补偿模型对包含幻影反射的待补偿图像进行加权,其中所述幻影反射补偿模型与进行拍摄时所述摄像装置中的光反射导致的幻影反射在图像中的强度分布有关;以及
    补偿步骤,用于组合待补偿图像和经加权的图像以消除图像中的幻影反射。
  19. 根据权利要求18所述的方法,其中,所述幻影反射补偿模型是从预定数量的校准图像中训练得到的,并且所述幻影反射补偿模型被训练为使得在应用该幻影反射补偿模型后校准图像中的幻影反射区域相比于幻影反射区域的相邻区域的强度变异小于特定阈值或者最小。
  20. 根据权利要求19所述的方法,其中,在幻影反射补偿模型的训练中通过如下操作来确定强度变异:
    根据预先设定的待训练的幻影反射补偿模型的中心移位参数将校准图像进行中 心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
    将移位和旋转后的图像乘以使用具有预先设定的强度因子参数的幻影反射补偿模型;以及
    将校准图像和乘以模型后得到的图像中的对应位置处的像素强度进行相减以获得强度变异。
  21. 根据权利要求18所述的方法,其中,所述幻影反射补偿模型包含对应于该图像中各子区域的幻影反射因子,其中所述子区域包含至少一个像素。
  22. 根据权利要求20所述的方法,其中,所述幻影反射补偿模型被设定为使得靠近图像中央的子区域处的幻影反射因子大于靠近图像边缘的子区域处的幻影反射因子。
  23. 根据权利要求20所述的方法,其中,所述幻影反射因子是基于高斯分布被确定的。
  24. 根据权利要求20-22中任一项所述的方法,其中,所述计算步骤进一步包括:
    对于拍摄图像中的每个子区域,利用幻影反射补偿模型中的对应的幻影反射强度因子进行强度缩放,从而获得经强度缩放的图像作为经加权的图像。
  25. 根据权利要求18所述的方法,其中所述幻影反射模型与所述拍摄装置中的在拍照时造成导致幻影反射的光反射的部件的特性相关,并且其中,所述幻影反射模型的参数依赖于该部件的特性。
  26. 根据权利要求24所述的方法,其中,所述部件包括透镜和照相滤光器中的至少一个。
  27. 根据权利要求24或25所述的方法,其中,所述幻影反射模型的模型至少包括与中心偏移相关的参数,以及用于确定幻影反射因子的高斯分布的相关参数。
  28. 根据权利要求18所述的方法,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述计算步骤进一步包括:
    根据该参数将待补偿图像进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的图像进行反向中心移位;以及
    使用幻影反射补偿模型对移位和旋转后的图像进行加权以得到经加权的图像。
  29. 根据权利要求28所述的方法,其中,上述移位和旋转使得移位和旋转后的图像中的幻影与原图像中的对象的位置相对应,而移位和旋转后的图像中的对象与原图像中的幻影的位置相对应。
  30. 根据权利要求18所述的方法,其中,所述补偿步骤包括:
    通过将待补偿图像和经加权的图像中的对应位置处的像素强度进行相减以获得经补偿的图像。
  31. 根据权利要求18所述的方法,其中,所述待补偿图像对应于至少两个子图像,
    并且对于每个子图像执行幻影反射补偿,由此通过组合补偿后的至少两个子图像来得到补偿的图像。
  32. 根据权利要求30所述的方法,其中,所述至少两个子图像包括通过拍摄原始图像数据获得的I图像和Q图像。
  33. 根据权利要求18所述的方法,其中,所述摄像装置为使用照相滤光器的光学摄像装置。
  34. 根据权利要求18-33中任一项所述的方法,其中,所述摄像装置包括ToF传感器,所述图像包括深度图像。
  35. 一种用于对于利用直接飞行时间(dToF)传感器的图像拍摄进行幻影反射补偿的电子设备,包括处理电路,被配置为:
    通过使用幻影反射补偿模型对由拍摄的原始数据得到的包含幻影反射的待补偿的像素直方图进行加权;以及
    组合待补偿的像素直方图和经加权的像素直方图以消除幻影反射。
  36. 根据权利要求35所述的电子设备,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述处理电路被配置为:
    根据该参数将待补偿直方图进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的直方图进行反向中心移位;以及
    使用幻影反射补偿模型对移位和旋转后的直方图进行加权以得到经加权的直方图。
  37. 根据权利要求35所述的电子设备,其中,上述移位和旋转使得移位和旋转后的直方图中的幻影反射峰值与原直方图中的对象峰值相对应。
  38. 根据权利要求35所述的电子设备,其中,所述处理电路被配置为:
    通过将待补偿的直方图和经加权的直方图中的对应位置处的值进行相减以获得经补偿的直方图。
  39. 一种用于对于利用直接飞行时间(dToF)传感器的图像拍摄进行幻影反射补偿的方法,包括:
    计算步骤,通过使用幻影反射补偿模型对由拍摄的原始数据得到的包含幻影反射的待补偿的像素直方图进行加权;以及
    补偿步骤,组合待补偿的像素直方图和经加权的像素直方图以消除幻影反射。
  40. 根据权利要求39所述的方法,其中,所述幻影反射模型包括与中心偏移相关的参数,并且其中,所述计算步骤进一步包括:
    根据该参数将待补偿直方图进行中心移位,以移位后的中心为轴进行旋转,然后根据该参数将旋转后的直方图进行反向中心移位;以及
    使用幻影反射补偿模型对移位和旋转后的直方图进行加权以得到经加权的直方图。
  41. 根据权利要求39所述的方法,其中,上述移位和旋转使得移位和旋转后的直方图中的幻影反射峰值与原直方图中的对象峰值相对应。
  42. 根据权利要求39所述的方法,其中,所述补偿步骤进一步包括:通过将待补偿的直方图和经加权的直方图中的对应位置处的值进行相减以获得经补偿的直方图。
  43. 一种设备,包括
    至少一个处理器;和
    至少一个存储设备,所述至少一个存储设备在其上存储指令,该指令在由所述至少一个处理器执行时,使所述至少一个处理器执行根据权利要求18-34和39-42中任一项所述的方法。
  44. 一种存储指令的存储介质,该指令在由处理器执行时能使得执行根据权利要求18-34和39-42中任一项所述的方法。
PCT/CN2021/096220 2020-05-27 2021-05-27 幻影反射补偿方法及设备 WO2021239029A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
EP21812623.3A EP4138030A4 (en) 2020-05-27 2021-05-27 METHOD AND DEVICE FOR COMPENSATING PHANTOM REFLECTIONS
US17/926,174 US20230196514A1 (en) 2020-05-27 2021-05-27 Method and device for compensating for ghost reflection
JP2022572752A JP2023527833A (ja) 2020-05-27 2021-05-27 ゴースト反射補償方法及び機器
CN202180036034.4A CN115917587A (zh) 2020-05-27 2021-05-27 幻影反射补偿方法及设备

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010463386.5 2020-05-27
CN202010463386.5A CN113808024A (zh) 2020-05-27 2020-05-27 幻影反射补偿方法及设备

Publications (1)

Publication Number Publication Date
WO2021239029A1 true WO2021239029A1 (zh) 2021-12-02

Family

ID=78745615

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/096220 WO2021239029A1 (zh) 2020-05-27 2021-05-27 幻影反射补偿方法及设备

Country Status (5)

Country Link
US (1) US20230196514A1 (zh)
EP (1) EP4138030A4 (zh)
JP (1) JP2023527833A (zh)
CN (2) CN113808024A (zh)
WO (1) WO2021239029A1 (zh)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107798658A (zh) * 2016-09-07 2018-03-13 三星电子株式会社 飞行时间测量装置和减少其中深度图像模糊的图像处理方法
US20180089847A1 (en) * 2016-09-23 2018-03-29 Samsung Electronics Co., Ltd. Time-of-flight (tof) capturing apparatus and image processing method of reducing distortion of depth caused by multiple reflection
WO2019164232A1 (ko) * 2018-02-20 2019-08-29 삼성전자주식회사 전자 장치, 이의 영상 처리 방법 및 컴퓨터 판독가능 기록 매체
CN110632614A (zh) * 2018-06-21 2019-12-31 美国亚德诺半导体公司 测量和去除由内部散射引起的飞行时间深度图像的破坏
CN110688763A (zh) * 2019-10-08 2020-01-14 北京工业大学 一种基于脉冲型ToF相机深度和光强图像的多径效应补偿方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4138030A4 *

Also Published As

Publication number Publication date
CN113808024A (zh) 2021-12-17
EP4138030A4 (en) 2023-09-06
CN115917587A (zh) 2023-04-04
US20230196514A1 (en) 2023-06-22
JP2023527833A (ja) 2023-06-30
EP4138030A1 (en) 2023-02-22

Similar Documents

Publication Publication Date Title
US10997696B2 (en) Image processing method, apparatus and device
US10504242B2 (en) Method and device for calibrating dual fisheye lens panoramic camera, and storage medium and terminal thereof
US10205896B2 (en) Automatic lens flare detection and correction for light-field images
US9444991B2 (en) Robust layered light-field rendering
WO2019148978A1 (zh) 图像处理方法、装置、存储介质及电子设备
CN110691193B (zh) 摄像头切换方法、装置、存储介质及电子设备
US10013764B2 (en) Local adaptive histogram equalization
US10805508B2 (en) Image processing method, and device
US20160029017A1 (en) Calibration of light-field camera geometry via robust fitting
US8405742B2 (en) Processing images having different focus
JP2020536457A (ja) 画像処理方法および装置、電子機器、ならびにコンピュータ可読記憶媒体
CN107851311B (zh) 对比度增强的结合图像生成系统和方法
WO2018210318A1 (zh) 图像虚化处理方法、装置、存储介质及电子设备
CN105721853A (zh) 用于深度图生成的数码相机的配置设置
WO2018210308A1 (zh) 图像虚化处理方法、装置、存储介质及电子设备
US20190355101A1 (en) Image refocusing
WO2021212435A1 (zh) 红外图像处理方法、装置、可移动平台与计算机可读介质
CN117061868A (zh) 一种基于图像识别的自动拍照装置
US20240022702A1 (en) Foldable electronic device for multi-view image capture
WO2021239029A1 (zh) 幻影反射补偿方法及设备
CN117058183A (zh) 一种基于双摄像头的图像处理方法、装置、电子设备及存储介质
JP6739955B2 (ja) 画像処理装置、画像処理方法、画像処理プログラム、および記録媒体
WO2018161322A1 (zh) 基于深度的图像处理方法、处理装置和电子装置
CN113947686A (zh) 一种图像的特征点提取阈值动态调整方法和系统
AU2018204554A1 (en) Method, system and apparatus for determining velocity of an object in a scene

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21812623

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021812623

Country of ref document: EP

Effective date: 20221117

ENP Entry into the national phase

Ref document number: 2022572752

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE