WO2019153920A1 - Image processing method and related device (一种图像处理的方法以及相关设备) - Google Patents

Image processing method and related device (一种图像处理的方法以及相关设备)

Info

Publication number
WO2019153920A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
color
texture
information
brightness
Application number
PCT/CN2018/123383
Other languages
English (en)
French (fr)
Inventor
骆立俊
朱力于
阙步军
Original Assignee
华为技术有限公司
Application filed by 华为技术有限公司
Priority to JP2020542815A priority Critical patent/JP6967160B2/ja
Priority to EP18905692.2A priority patent/EP3734552A4/en
Publication of WO2019153920A1 publication Critical patent/WO2019153920A1/zh
Priority to US16/943,497 priority patent/US11250550B2/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10048Infrared image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging

Definitions

  • The present application relates to the field of images, and in particular to an image processing method and related devices.
  • With the development of imaging technology, a camera device can capture clear images under high illumination, while under low illumination the captured images are often unclear; improving image sharpness under low illumination has therefore long been a problem that camera devices need to address.
  • In existing schemes, light in an optical imaging system can be separated by a light-splitting device according to wavelength band and proportion, and the separated frequency components are imaged respectively to obtain a visible light image and an infrared light image, where the visible light image is a color image and the infrared light image is an achromatic image. The visible light image and the infrared light image are then fused by a preset fusion algorithm, merging the imaging of the respective frequency components to obtain a fused target image. The color components of the target image are derived from the visible light image: after the brightness and texture of the target image are determined, the color components are fused in to obtain the target image.
  • Because the infrared light image and the visible light image differ considerably in brightness distribution, and different materials have different reflection coefficients under visible and infrared light, the brightness difference between the infrared light image and the visible light image is obvious. Especially under low illumination, the texture distribution and brightness distribution of the two images differ greatly; the infrared light image is usually clearer than the visible light image and its texture is richer, so the texture information of the infrared light image takes a larger proportion when the images are fused. As a result, the fused target image is closer to the texture under infrared light and differs greatly from the actual texture of the scene, producing severe distortion.
  • Embodiments of the present application provide an image processing method and related devices for processing an image acquired by an optical imaging system, performing contrast, texture, and color processing on the image so that, especially in a low-illumination scene, the resulting image texture is sharper and the texture and color are closer to the actual texture and color.
  • A first aspect of the present application provides an image processing method, including:
  • obtaining a visible light image and an infrared light image; acquiring first brightness information and second brightness information, where the first brightness information is brightness information of the visible light image and the second brightness information is brightness information of the infrared light image; fusing the first brightness information with the second brightness information to obtain a contrast fusion image; acquiring first texture information and second texture information, where the first texture information is texture information of the visible light image and the second texture information is texture information of the infrared light image; fusing the first texture information and the second texture information with the contrast fusion image to obtain a texture fusion image; acquiring a color fusion image according to the visible light image and the infrared light image; and fusing the texture fusion image with the color fusion image to obtain a target image.
  • In this embodiment of the present application, first brightness information is extracted from the visible light image and second brightness information from the infrared light image, and the two are fused to obtain a contrast fusion image; extracting the brightness information separately can reduce noise in the contrast fusion image and make its brightness distribution more uniform and closer to the brightness distribution under visible light. First texture information is then extracted from the visible light image and second texture information from the infrared light image, and both are fused with the contrast fusion image to obtain a texture fusion image whose texture is clearer. Color fusion is performed on the infrared light image and the visible light image to obtain a color fusion image; adding the infrared light image as a basis of the color fusion image can reduce missing color, color cast, or heavy noise. Finally, the color fusion image and the texture fusion image are fused to obtain the target image, which can reduce noise in the target image, make the texture of the target image clearer, and bring its brightness distribution closer to that under visible light.
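As a concrete illustration of this pipeline, the following is a minimal sketch in Python (NumPy/OpenCV). The fixed blend weights, the Gaussian-blur detail extractor, and the use of the visible image's chrominance alone are illustrative assumptions, not the patent's formulas; the implementations below refine each step.

```python
import cv2
import numpy as np

def fuse_images(visible_bgr: np.ndarray, infrared_gray: np.ndarray) -> np.ndarray:
    """Minimal sketch of the claimed pipeline; all weights are illustrative."""
    yuv = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)
    y_vis = yuv[:, :, 0].astype(np.float32)   # first brightness information
    y_ir = infrared_gray.astype(np.float32)   # second brightness information

    # Contrast fusion: a fixed-ratio blend stands in for the preset first formula.
    contrast = 0.4 * y_vis + 0.6 * y_ir

    # Texture information: high-frequency detail = image minus a smoothed copy.
    detail_vis = y_vis - cv2.GaussianBlur(y_vis, (5, 5), 0)
    detail_ir = y_ir - cv2.GaussianBlur(y_ir, (5, 5), 0)
    # Texture fusion: superimpose weighted detail onto the contrast fusion image.
    texture_fused = contrast + 0.3 * detail_vis + 0.7 * detail_ir

    # Color fusion is simplified here to the visible chrominance; the patent
    # additionally infers color from the infrared image (see the implementations below).
    out = yuv.copy()
    out[:, :, 0] = np.clip(texture_fused, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YUV2BGR)
```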
  • With reference to the first aspect, in a first implementation of the first aspect, acquiring a color fusion image according to the visible light image and the infrared light image may include: performing color perception restoration on the visible light image to obtain a color perception restored image; performing color inference on the infrared light image according to a preset color correspondence to obtain a color inference image; and fusing the color perception restored image with the color inference image to obtain the color fusion image.
  • In this embodiment of the present application, color perception restoration can be performed on the visible light image to perceive and restore the colors under visible light, including partially missing colors. Because the color components in the infrared light image correspond to the color components under visible light, color inference can be performed on the infrared light image according to the preset color correspondence to obtain a color inference image, and the color perception restored image and the color inference image are then fused to obtain the color fusion image. The color components of the color inference image can fill in the parts where color is missing, color-cast, or noisy under visible light, making the color of the color fusion image more complete, reducing noise in the color fusion image, further reducing color noise in the target image, and improving color loss or color cast.
  • With reference to the first aspect or the first implementation of the first aspect, in a second implementation of the first aspect, fusing the first brightness information with the second brightness information to obtain a contrast fusion image may include: calculating the first brightness information and the second brightness information by a preset first formula to obtain a target brightness value, and obtaining the contrast fusion image from the target brightness value. In this embodiment, this adds a way of obtaining the contrast fusion image.
  • With reference to any one of the first aspect, the first implementation, or the second implementation of the first aspect, in a third implementation of the first aspect, fusing the first texture information and the second texture information with the contrast fusion image to obtain a texture fusion image may include: calculating the first texture information and the second texture information by a preset second formula to obtain a target texture pixel value, and superimposing the target texture pixel value onto the contrast fusion image to obtain the texture fusion image. In this embodiment, this adds a way of obtaining the texture fusion image.
  • With reference to the first implementation of the first aspect, in a fourth implementation of the first aspect, performing color inference on the infrared light image according to the preset color correspondence to obtain a color inference image may include: determining ratios of color components for the infrared light image according to the preset color correspondence, and determining target colors from those ratios in a preset calculation manner to obtain the color inference image. The specific process may thus be to determine the color-component ratios of the color inference image from the color components in the infrared light image according to the preset color correspondence; this adds a way of obtaining the color inference image, sketched below.
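A minimal sketch of such color inference, assuming the preset correspondence is a per-intensity lookup table of (R, G, B) component ratios (for example learned offline from paired infrared/visible data); both the table and the scaling rule are hypothetical placeholders, not the patent's correspondence:

```python
import numpy as np

# Hypothetical preset correspondence: for each infrared intensity (0-255),
# a ratio of (R, G, B) color components. Gray ratios are placeholders here.
COLOR_RATIOS = np.tile(np.array([1 / 3, 1 / 3, 1 / 3], dtype=np.float32), (256, 1))

def infer_color(infrared_gray: np.ndarray) -> np.ndarray:
    """Determine target colors from the per-pixel color-component ratios."""
    ratios = COLOR_RATIOS[infrared_gray]                  # (H, W, 3) ratios
    intensity = infrared_gray[..., None].astype(np.float32)
    # Assumed preset calculation manner: scale the observed intensity by the
    # component ratios so the components sum back to the observed brightness.
    rgb = 3.0 * intensity * ratios
    return np.clip(rgb, 0, 255).astype(np.uint8)
```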
  • With reference to the first or the fourth implementation of the first aspect, in a fifth implementation of the first aspect, performing color perception restoration on the visible light image to obtain a color perception restored image may include: inverting the brightness of the visible light image to obtain a brightness-inverted image; calculating the brightness-inverted image with a dehazing (defogging) algorithm to obtain an enhanced image with enhanced brightness and color; and inverting the enhanced image to obtain the color perception restored image. In this embodiment, the brightness of the visible light image may be inverted, the inverted image processed by the dehazing algorithm to obtain an image with enhanced brightness and color, and that image inverted again to obtain a color perception image with enhanced color and brightness.
  • With reference to any one of the first aspect or its first through fifth implementations, in a sixth implementation of the first aspect, fusing the texture fusion image with the color fusion image to obtain the target image may include: fusing the brightness information in the texture fusion image with the color components in the color fusion image to obtain the target image. The brightness information in the texture fusion image and the color components in the color fusion image are determined and then fused, for example by superposition or proportional calculation, to obtain the target image; this can make the color of the target image more complete and can improve problems in the target image such as color cast, heavy noise, unclear texture, and a brightness distribution that differs greatly from that under visible light. A sketch of this merge follows.
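A minimal sketch of this final merge, assuming the texture fusion image supplies the luminance (Y) channel and the color fusion image supplies the chrominance (U, V) channels; the YUV channel layout is an assumption for illustration:

```python
import cv2
import numpy as np

def merge_target(texture_fused_y: np.ndarray, color_fused_bgr: np.ndarray) -> np.ndarray:
    """Take Y from the texture fusion image and U/V from the color fusion image."""
    yuv = cv2.cvtColor(color_fused_bgr, cv2.COLOR_BGR2YUV)
    yuv[:, :, 0] = np.clip(texture_fused_y, 0, 255).astype(np.uint8)
    return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)
```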
  • A second aspect of the present application provides an image processing apparatus having functions to implement the image processing method of the first aspect or any implementation of the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the functions described above.
  • A third aspect of the present application provides a camera device, which may include: a lens, a processor, a memory, a bus, and an input/output interface; the lens is configured to acquire an optical image; the memory is configured to store program code; and the processor performs the steps in the first aspect or any implementation of the first aspect when calling the program code in the memory.
  • A fourth aspect of the present application provides a terminal device, including: a lens, a processor, a memory, a bus, and an input/output interface; the lens is configured to acquire an optical image; the memory is configured to store program code; and the processor performs the steps in the first aspect or any implementation of the first aspect when calling the program code in the memory.
  • A fifth aspect of the present application provides a storage medium. It should be noted that the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and stores the computer software instructions used by the above devices, including a program for performing the first aspect described above. The storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
  • a sixth aspect of the embodiments of the present application provides a computer program product comprising instructions, which when executed on a computer, cause the computer to perform the method as described in the first aspect of the present application or any of the alternative embodiments of the first aspect.
  • A seventh aspect of the present application provides a chip system. The chip system includes a processor configured to support an image processing apparatus in implementing the functions involved in the first aspect, for example, transmitting or processing the data and/or information involved in the above method. In a possible design, the chip system further includes a memory configured to store the program instructions and data necessary for the image processing method. The chip system may consist of chips, or may include chips and other discrete devices.
  • As can be seen from the above technical solutions, the embodiments of the present application have the following advantages:
  • In the embodiments of the present application, after the visible light image and the infrared light image are obtained, a contrast-enhanced contrast fusion image is obtained from the brightness information of the visible light image and the infrared light image; during texture fusion, the texture information of the visible light image and the infrared light image is fused to obtain a texture fusion image with clearer texture; and a color fusion image is then acquired from the infrared light image and the visible light image. Fusing this color fusion image with the texture fusion image yields a target image whose color is closer to the actual color. Compared with existing schemes that determine the fused texture only from the proportions of the texture information of the infrared and visible light images, the embodiments of the present application additionally reference the contrast of the visible light image and the infrared light image when performing texture fusion, so that the resulting texture is clearer, the brightness and texture of the target image are closer to the brightness and texture under actual visible light, and image distortion is reduced.
  • FIG. 1 is a framework diagram of the image processing method in an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an image synthesized by an existing scheme;
  • FIG. 3 is a schematic diagram of an embodiment of a method for image processing in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of another embodiment of a method for image processing in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another embodiment of a method for image processing in an embodiment of the present application.
  • FIG. 6 is a schematic diagram of an embodiment of an image processing apparatus according to an embodiment of the present application.
  • FIG. 7 is a schematic diagram of an embodiment of an image pickup apparatus according to an embodiment of the present application.
  • Embodiments of the present application provide an image processing method and related devices for processing an image acquired by an optical imaging system, performing contrast, texture, and color processing on the image so that, especially in a low-illumination scene, the resulting image texture is sharper and the texture and color are closer to the actual texture and color.
  • Imaging technology is widely applied in daily, industrial, and commercial fields; for example, surveillance devices play an important role in industry and commerce, and improving the sharpness of the images they produce has long been pursued. Existing schemes, however, obtain clear images only under good lighting. In a low-illumination scene, the IR-CUT filter (a low-pass filter) filters out the infrared light in the environment around the monitoring device, so that this infrared light cannot be used effectively, reducing the overall amount of light contributing to the obtained image. In existing schemes, the visible light image and the infrared light image are fused directly by a fusion algorithm; because the frequency of the noise and the frequency range of the image details differ little, the synthesized image cannot distinguish noise from image detail, resulting in excessive noise after synthesis. Moreover, since only the color components of the visible light image are used to synthesize the image, colors in the visible light image are easily missing or color-cast in a low-illumination scene, causing color distortion in the synthesized image. Also, in a low-illumination scene the texture of the infrared light image is clearer than that of the visible light image, but the brightness distribution and texture details of the two differ considerably, and the brightness and texture of the synthesized image tend to follow the infrared light image, so the texture of the synthesized image differs greatly from the texture under actual visible light. Therefore, to solve the problems of color distortion, large brightness and texture differences, and heavy noise in existing schemes, an embodiment of the present application provides an image processing method.
  • It should be understood that the low-illumination scene described in the embodiments of the present application is a scene whose illumination is below a threshold. The low-illumination threshold may be adjusted according to the characteristics of the devices in the actual optical imaging system, including the sensor or the light splitter; for example, if the device characteristics are good, the threshold can be lowered, and if the device characteristics are poor, the threshold can be raised.
  • The framework of the image processing method in the embodiments of the present application is shown in FIG. 1. A visible light image and an infrared light image are acquired by an optical imaging system, which may be the camera of a monitoring device or the camera of a terminal device or a video camera. The visible light image and the infrared light image are then fused to obtain a target image: the brightness information, texture information, and color information of the visible light image and the infrared light image are fused respectively, yielding a clear target image whose texture and color are closer to the actual image color. The image processing method provided by the embodiments of the present application can thus make the target image closer to the actual texture and color.
  • For example, an image obtained by an existing scheme is shown in FIG. 2. Under low illumination, because the texture of the infrared light image is clearer than the brightness and texture of the visible light image during image fusion in the existing scheme, the brightness of the infrared light image takes a larger proportion than that of the visible light image, so the brightness of the fused image differs greatly from the brightness and texture under actual visible light. For example, the synthesized "tree" in the image of FIG. 2 is too bright, differing greatly from the brightness of the "tree" under actual visible light.
  • Therefore, the embodiments of the present application obtain a clearer texture fusion image by acquiring brightness information and texture information separately from the visible light image and the infrared light image, obtain a color fusion image from the color information of the visible light image and the infrared light image, and combine it with the texture fusion image to synthesize the target image, enhancing the color of the image. The specific flow of image processing in the embodiments of the present application is shown in FIG. 3 and includes:
  • 301. Obtain a visible light image and an infrared light image. The visible light image and the infrared light image can be obtained by an optical imaging system; for example, they can be obtained by the camera of a monitoring device, or by a single camera or multiple cameras of a mobile phone.
  • 302. Acquire first brightness information and second brightness information. After the visible light image and the infrared light image are obtained, brightness information is acquired from each: the first brightness information is the brightness information in the visible light image, and the second brightness information is the brightness information in the infrared light image. The first brightness information may include the brightness values of the respective pixels in the visible light image, and the second brightness information may include the brightness values of the respective pixels in the infrared light image.
  • 303. Fuse the first brightness information with the second brightness information to obtain a contrast fusion image. A specific fusion manner may be to calculate the brightness values in the first brightness information and the brightness values in the second brightness information in proportion, obtaining a target brightness value for each pixel, and then to compose the contrast fusion image from these target brightness values, which can make the brightness of the contrast fusion image closer to the brightness of the actual image. The proportion may be obtained from a formula or may be preset. For example, if the brightness value of a pixel in the visible light image is 200 nits and the brightness value of the corresponding pixel in the infrared light image is 400 nits, the proportions of the two brightness values can be calculated according to a preset formula, yielding, for example, a target brightness value of 320 nits for that pixel.
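The 320-nit result is consistent with, for instance, a fixed proportional blend weighting the infrared brightness at 0.6 and the visible brightness at 0.4; these weights are an illustrative assumption, not values given by the patent:

```python
import numpy as np

def contrast_fuse(y_visible: np.ndarray, y_infrared: np.ndarray,
                  w_visible: float = 0.4, w_infrared: float = 0.6) -> np.ndarray:
    """Per-pixel proportional blend of the two brightness maps (weights assumed)."""
    return w_visible * y_visible + w_infrared * y_infrared

# 0.4 * 200 + 0.6 * 400 = 320, matching the example in the text.
print(contrast_fuse(np.array([200.0]), np.array([400.0])))  # [320.]
```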
  • 304. Acquire first texture information and second texture information. First texture information is acquired from the visible light image and second texture information from the infrared light image. In an actual scene, the texture of the infrared light image is usually clearer than the texture of the visible light image, so the proportion of the infrared light image's texture may be increased when synthesizing the image.
  • It should be noted that this embodiment does not limit the execution order of step 302 and step 304; step 302 may be performed first, or step 304 may be performed first, which is not limited here.
  • 305. Fuse the first texture information and the second texture information with the contrast fusion image to obtain a texture fusion image. After the first texture information and the second texture information are obtained, the pixel values of all textures in the visible light image and the infrared light image may be acquired; the pixel values in the first texture information and in the second texture information are calculated to obtain target texture pixel values, which are then superimposed onto the contrast fusion image to obtain a texture fusion image with clearer texture. In a low-illumination scene, much texture detail is lost in the visible light image, the texture detail in the infrared light image is richer than that in the visible light image, and the infrared light image has less noise than the visible light image; therefore, when performing texture fusion, the proportion of the richer texture information from the infrared light image can be increased, so that the resulting texture fusion image has clearer texture and less noise.
  • It should be noted that this embodiment does not limit the execution order of step 302 and step 305; step 302 may be performed first, or step 305 may be performed first, which is not limited here.
  • 306. Acquire a color fusion image according to the visible light image and the infrared light image. Color information is acquired from the visible light image and the infrared light image respectively: color perception restoration can be performed on the visible light image to obtain its color information, and color inference learning can be performed on the infrared light image according to the preset color correspondence to obtain the color information of the infrared light image. The color information of the visible light image and that of the infrared light image are then calculated to obtain the color components of the respective pixel points in the color fusion image. In a low-illumination scene, the color grain in the visible light image is noisy and the color distortion is severe; a color fusion image obtained by fusing the color information inferred from the infrared light image with the color information perceptually restored from the visible light image has lower color noise and colors closer to those under actual visible light.
  • Finally, the target image is obtained by fusing the texture fusion image with the color fusion image: the luminance component of the target image can be obtained from the texture fusion image and the color component of the target image from the color fusion image, and the luminance component and the color component are then combined to obtain the target image.
  • In the embodiments of the present application, brightness information is acquired from the visible light image and the infrared light image respectively and a contrast fusion image is obtained from it; texture information acquired from the visible light image and the infrared light image is then fused with the contrast fusion image to obtain a texture fusion image, whose texture is clearer and whose brightness distribution is closer to that of actual visible light. A color fusion image is also obtained, and the color inferred from the infrared light image can be used to fill in the colors missing from the visible light image, so that the resulting color fusion image can contain complete colors. Therefore, the target image obtained from the color fusion image and the texture fusion image has clearer texture, a brightness distribution closer to that of the actual illumination, and more complete color, reducing the color loss in the target image caused by the missing colors of the visible light image. Because the luminance and texture information of the infrared light image and the visible light image are fused separately when the target image is fused, the noise in the synthesized target image can also be reduced.
  • Referring to FIG. 4, another embodiment of the image processing method in the embodiments of the present application is shown. The visible light image 401 and the infrared light image 402 are acquired by the optical imaging system; the visible light image 401 then passes through noise reducer 1, which filters out part of the noise in the visible light image, for example grain noise, and the infrared light image 402 passes through noise reducer 2, which filters out part of the noise in the infrared light image. Noise reducer 1 and noise reducer 2 may be image signal processing (ISP) noise reducers, which can perform image processing on the infrared light image and the visible light image, including exposure control, white balance control, noise reduction, and so on.
  • After being processed by ISP noise reducer 1 and ISP noise reducer 2, each image is a YUV-format image (luminance signal Y and chrominance signals U, V) with accurate color and luminance distribution; that is, the luminance component of the visible light image and the luminance component of the infrared light image can be acquired.
  • The optical imaging system may be one camera or multiple cameras; here one camera is taken as an example. The lens may include multiple layers of lenses: the light is first collected by the lens and then split by a dichroic prism, with sensor 1 generating the visible light image 401 and sensor 2 generating the infrared light image 402. In a practical application, the optical imaging system may also generate the visible light image and the infrared light image directly with separate imaging devices, which can be adjusted according to actual design requirements and is not limited here.
  • The contrast fusion image is then obtained from the noise-reduced luminance components. The specific fusion process may be: separately calculating the local contrast in the visible light image and the corresponding local contrast in the infrared light image, and then calculating the weights of the two components according to preset gradient features. Taking a corresponding local region of the infrared light image and the visible light image as an example, if the local contrast of the infrared light image and the local contrast of the visible light image differ under the preset gradient features such that the synthesis should lean toward the local contrast in the infrared light image, the weight of the local contrast in the infrared light image is larger, and the local contrast of the infrared light image is used more as the local contrast of the contrast fusion image. In the specific contrast fusion process, a local region may be a pixel matrix, for example a 6*6 pixel matrix, and the first weight matrix and the second weight matrix may be preset or may be calculated from actual data. The first formula may be a weighted combination in which p is the brightness value of the visible light image, W is a preset fixed matrix, Q is the brightness value of the infrared light image, λ is a preset coefficient that can be adjusted according to actual needs, and s_i is the resulting brightness value of pixel point i.
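Because the publication's first formula is not reproduced above, the following is only a plausible sketch of weighted local-contrast fusion under these definitions: local contrast is measured per 6*6 block, and the source with the higher local contrast receives the larger weight. The contrast measure and the weighting function are assumptions:

```python
import numpy as np

def local_contrast(y: np.ndarray, block: int = 6) -> np.ndarray:
    """Standard deviation of brightness per block as a simple local-contrast measure."""
    h, w = y.shape
    h, w = h - h % block, w - w % block
    blocks = y[:h, :w].reshape(h // block, block, w // block, block)
    return blocks.std(axis=(1, 3))

def fuse_local_contrast(y_vis: np.ndarray, y_ir: np.ndarray, block: int = 6) -> np.ndarray:
    """Blend brightness blockwise, weighting the image with the higher local contrast."""
    c_vis = local_contrast(y_vis, block)
    c_ir = local_contrast(y_ir, block)
    w_ir = c_ir / (c_vis + c_ir + 1e-6)            # larger weight where IR contrast dominates
    w_ir = np.kron(w_ir, np.ones((block, block)))  # expand block weights to pixel weights
    h, w = w_ir.shape
    return (1 - w_ir) * y_vis[:h, :w] + w_ir * y_ir[:h, :w]
```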
  • The texture fusion image 406 is then obtained from the contrast fusion image: first texture information is extracted from the visible light image after part of its noise has been filtered out, second texture information is extracted from the infrared light image after part of its noise has been filtered out, and the pixel values contained in the first texture information and the second texture information are calculated and superimposed onto the contrast fusion image to obtain the texture fusion image. The specific process may be: calculating the details in the visible light image and the details in the infrared light image, calculating the optimal pixel value of each detail texture according to a preset formula, that is, the target pixel value, and then superimposing the optimal pixel values of the detail textures onto the contrast fusion image to obtain the texture fusion image 406. For the visible light part, this may include acquiring the value x of the current visible-light pixel and the value x_o,b of the corresponding pixel of the non-local-average-filtered visible light image, and subtracting to obtain the visible-light texture detail Δx = x − x_o,b; for the infrared light part, it may likewise include obtaining the pixel value of the non-local-average-filtered infrared light image and the value of the current infrared-light pixel. The second formula may be one in which λ_d is a preset coefficient that can be adjusted according to actual needs and f_j is a preset local weight matrix; the calculated pixel values are superimposed onto the contrast fusion image, so that the value of a pixel point in the texture fusion image is approximately x_o,b + Δx.
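A sketch of this detail extraction under the definitions above, using OpenCV's non-local means denoiser as the non-local average filter; the filter strength and the blend weights are illustrative assumptions:

```python
import cv2
import numpy as np

def texture_detail(y: np.ndarray) -> np.ndarray:
    """Texture detail Δx = x − x_o,b: image minus its non-local-means filtered version."""
    base = cv2.fastNlMeansDenoising(y.astype(np.uint8), None, h=10)
    return y.astype(np.float32) - base.astype(np.float32)

def texture_fuse(contrast: np.ndarray, y_vis: np.ndarray, y_ir: np.ndarray,
                 w_vis: float = 0.3, w_ir: float = 0.7) -> np.ndarray:
    """Superimpose weighted visible/infrared texture details onto the contrast fusion image."""
    delta = w_vis * texture_detail(y_vis) + w_ir * texture_detail(y_ir)
    return np.clip(contrast + delta, 0, 255)
```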
  • The noise-reduced visible light image and the noise-reduced infrared light image are then color-fused to obtain the color fusion image 409. The specific process may be: performing color perception restoration on the visible light image after part of its noise has been filtered out, to obtain the color perception restored image 407. The specific process of color perception restoration may be: inverting the brightness of the noise-filtered visible light image to obtain a brightness-inverted visible light image; enhancing the brightness and color of the brightness-inverted visible light image with a dehazing algorithm; and then inverting the enhanced inverted image to obtain a visible light image with enhanced brightness and color. The enhancement may include calculating the proportional relationships of the gray values of adjacent pixel points in the brightness-inverted visible light image, correcting the gray value of each pixel point by these proportional relationships, linearly enhancing the corrected gray values to obtain an enhanced inverted image, and inverting that image to obtain the visible light image with enhanced brightness and color.
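A minimal sketch of the invert-enhance-invert idea, using a much-simplified dark-channel-prior dehaze as the dehazing algorithm; the prior, window size, and constants are assumptions rather than the patent's specific enhancement:

```python
import cv2
import numpy as np

def dehaze_dark_channel(img: np.ndarray, omega: float = 0.8, win: int = 15) -> np.ndarray:
    """Very simplified dark-channel-prior dehazing with atmospheric light fixed at 255."""
    dark = cv2.erode(img.min(axis=2), np.ones((win, win), np.uint8))
    t = np.maximum(1.0 - omega * dark.astype(np.float32) / 255.0, 0.1)  # transmission map
    return (img.astype(np.float32) - 255.0) / t[..., None] + 255.0

def color_perception_restore(visible_bgr: np.ndarray) -> np.ndarray:
    """Invert brightness, enhance via dehazing, then invert back."""
    inverted = 255 - visible_bgr               # brightness-inverted image
    enhanced = dehaze_dark_channel(inverted)   # brightness and color enhancement
    return (255 - np.clip(enhanced, 0, 255)).astype(np.uint8)
```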
  • The color components in the infrared light image have a correspondence with the color components in the visible light image. This correspondence may be preset, or the correspondence between the RGB (red, green, blue) components in the infrared light image and the colors in the visible light image may be obtained through large amounts of data and machine learning. The color components in the infrared light image can therefore be inferred through this correspondence to obtain an image whose colors correspond to those of the visible light image, that is, the color inference image 408, which can be used to correct the parts of the visible light image whose colors are missing or color-cast, yielding an image whose color is closer to that under the actual lighting. It should be noted that this embodiment does not limit the order in which the color perception restored image 407 and the color inference image 408 are obtained; the color perception restored image 407 may be acquired first, or the color inference image 408 may be acquired first, which can be adjusted according to actual needs and is not limited here.
  • After the color perception restored image 407 and the color inference image 408 are obtained, they can be fused to obtain the color fusion image 409. During the specific fusion, the correction may be determined with reference to the brightness values of the visible light image: if the brightness of a part of the color perception restored image is too low or its noise is too large, the reference proportion of the corresponding part of the color inference image may be increased, that is, the color components of the corresponding part of the color inference image can be used to correct the color of the part whose brightness is too low or whose noise is too large, to obtain a more complete color fusion image. Using the visible light image together with the infrared light image to determine the color of the target image can therefore improve the color noise, color distortion, and contamination of the target image in a low-illumination scene.
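A sketch of this brightness-guided blend, assuming a per-pixel weight that shifts toward the color inference image where the visible luminance is low; the thresholds are illustrative assumptions:

```python
import numpy as np

def color_fuse(restored_rgb: np.ndarray, inferred_rgb: np.ndarray,
               y_visible: np.ndarray, low: float = 40.0, high: float = 90.0) -> np.ndarray:
    """Blend per pixel: dark (or noisy) regions lean on the color inference image."""
    # Weight of the inference image: 1 where luminance <= low, 0 where >= high.
    w = np.clip((high - y_visible) / (high - low), 0.0, 1.0)[..., None]
    fused = w * inferred_rgb.astype(np.float32) + (1 - w) * restored_rgb.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)
```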
  • It should be noted that this embodiment does not limit the order in which the texture fusion image 406 and the color fusion image 409 are obtained; the texture fusion image 406 may be acquired first, or the color fusion image 409 may be acquired first, which can be adjusted according to actual needs and is not limited here. After the texture fusion image 406 and the color fusion image 409 are obtained, they are fused: the texture details in the texture fusion image and the color components in the color fusion image are superimposed and combined to obtain the target image 410.
  • In the embodiments of the present application, brightness information is acquired from the visible light image and the infrared light image respectively and a contrast fusion image is obtained from it; a texture fusion image is then obtained by fusing the texture information acquired from the visible light image and the infrared light image with the contrast fusion image, so that the texture in the texture fusion image is clearer and the brightness distribution is closer to that of actual visible light. A color fusion image is also obtained, and the color inferred from the infrared light image can be used to fill in the colors missing from the visible light image, so that the resulting color fusion image can contain complete colors. The target image obtained from the color fusion image and the texture fusion image therefore has clearer texture, a brightness distribution closer to the actual illumination, and more complete color, reducing the color loss in the target image caused by the missing colors of the visible light image. Because the luminance and texture information of the infrared light image and the visible light image are fused separately when the target image is fused, the noise in the synthesized target image can be reduced and the color loss or color cast of the synthesized image can be reduced, improving the problems of color noise, color distortion, and contamination of the target image under low illumination.
  • Referring to FIG. 6, an embodiment of an image processing apparatus in the embodiments of the present application may include:
  • a brightness information acquisition module 602, configured to acquire first brightness information and second brightness information, where the first brightness information is brightness information of the visible light image and the second brightness information is brightness information of the infrared light image;
  • a contrast fusion module 603, configured to fuse the first brightness information with the second brightness information to obtain a contrast fusion image;
  • a texture information acquisition module 604, configured to acquire first texture information and second texture information, where the first texture information is texture information of the visible light image and the second texture information is texture information of the infrared light image;
  • a texture fusion module 605, configured to fuse the first texture information and the second texture information with the contrast fusion image to obtain a texture fusion image;
  • a color fusion module 606, configured to acquire a color fusion image according to the visible light image and the infrared light image; and
  • a target synthesis module 607, configured to fuse the texture fusion image with the color fusion image to obtain the target image.
  • In a possible design, the color fusion module 606 may include:
  • a perception restoration sub-module 6061, configured to perform color perception restoration on the visible light image to obtain a color perception restored image;
  • a color inference sub-module 6062, configured to perform color inference on the infrared light image according to a preset color correspondence to obtain a color inference image; and
  • a color fusion sub-module 6063, configured to fuse the color perception restored image with the color inference image to obtain the color fusion image.
  • In a possible design, the contrast fusion module 603 is specifically configured to: calculate the first brightness information and the second brightness information by the preset first formula to obtain a target brightness value, and obtain the contrast fusion image from the target brightness value.
  • In a possible design, the texture fusion module 605 is specifically configured to: calculate the first texture information and the second texture information by the preset second formula to obtain a target texture pixel value, and superimpose the target texture pixel value onto the contrast fusion image to obtain the texture fusion image.
  • In a possible design, the color inference sub-module 6062 is specifically configured to: determine ratios of color components for the infrared light image according to the preset color correspondence, and determine target colors from those ratios in a preset calculation manner to obtain the color inference image.
  • In a possible design, the perception restoration sub-module 6061 is specifically configured to: invert the brightness of the visible light image to obtain a brightness-inverted image, calculate the brightness-inverted image with the dehazing algorithm to obtain an enhanced image with enhanced brightness and color, and invert the enhanced image to obtain the color perception restored image.
  • In a possible design, the target synthesis module 607 is specifically configured to: fuse the brightness information in the texture fusion image with the color components in the color fusion image to obtain the target image.
  • In a possible design, when the image processing apparatus is a chip in a terminal, the chip includes a processing unit and a communication unit; the processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit. The processing unit may execute the computer-executable instructions stored in the storage unit, so that the chip in the terminal performs the image processing method of any of the above aspects. Optionally, the storage unit is a storage unit in the chip, such as a register or a cache; the storage unit may also be a storage unit located outside the chip in the terminal, such as a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM).
  • The processor mentioned in any of the above may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the program execution of the method of the first aspect.
  • An embodiment of the present application further provides a camera device. As shown in FIG. 7, for convenience of description only the parts related to this embodiment are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present application. The camera device may be any terminal device including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sales (POS) terminal, an in-vehicle computer, and the like. FIG. 7 is a block diagram showing part of the structure of a camera device provided by an embodiment of the present application. Referring to FIG. 7, the camera device includes: a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a lens 770, a processor 780, and a power supply 790.
  • Those skilled in the art can understand that the structure of the camera device shown in FIG. 7 does not constitute a limitation of the camera device, which may include more or fewer components than illustrated, combine some components, or use a different arrangement of components.
  • The RF circuit 710 can be used to receive and transmit signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it passes the information to the processor 780 for processing, and it sends designed uplink data to the base station. Generally, the RF circuit 710 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 can also communicate with networks and other devices via wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
  • The memory 720 can be used to store software programs and modules, and the processor 780 executes the various functional applications and data processing of the camera device by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, applications required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the camera device (such as audio data or a phone book). In addition, the memory 720 can include high-speed random access memory, and can also include non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 730 can be configured to receive input digital or character information and to generate key signal inputs related to user settings and function control of the camera.
  • the input unit 730 may include a touch panel 731 and other input devices 732.
  • The touch panel 731, also referred to as a touch screen, can collect touch operations by the user on or near it (such as operations by the user on or near the touch panel 731 using a finger, a stylus, or any suitable object or accessory) and drive the corresponding connecting device according to a preset program. Optionally, the touch panel 731 can include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 780, and can receive commands from the processor 780 and execute them.
  • the touch panel 731 can be implemented in various types such as resistive, capacitive, infrared, and surface acoustic waves.
  • In addition to the touch panel 731, the input unit 730 may also include other input devices 732.
  • other input devices 732 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control buttons, switch buttons, etc.), trackballs, mice, joysticks, and the like.
  • The display unit 740 can be used to display information input by the user or information provided to the user, as well as various menus of the camera device.
  • the display unit 740 can include a display panel 741.
  • the display panel 741 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • Further, the touch panel 731 can cover the display panel 741. When the touch panel 731 detects a touch operation on or near it, it transmits the operation to the processor 780 to determine the type of the touch event, and the processor 780 then provides a corresponding visual output on the display panel 741 according to the type of the touch event. Although in FIG. 7 the touch panel 731 and the display panel 741 are two independent components implementing the input and output functions of the camera device, in some embodiments the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the camera device.
  • the camera device may also include at least one type of sensor 750, such as a light sensor, motion sensor, and other sensors.
  • Specifically, the light sensor may include an ambient light sensor and a proximity sensor: the ambient light sensor may adjust the brightness of the display panel 741 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 741 and/or the backlight when the camera device moves to the ear. As one kind of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in all directions (usually three axes), can detect the magnitude and direction of gravity when stationary, and can be used in applications that recognize the attitude of the camera device (such as switching between landscape and portrait screens, related games, and magnetometer attitude calibration) and in vibration-recognition related functions (such as a pedometer or tapping); as for other sensors that can also be configured in the camera device, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, details are not described here.
  • The audio circuit 760, a speaker 761, and a microphone 762 can provide an audio interface between the user and the camera device. The audio circuit 760 can transmit the electrical signal converted from received audio data to the speaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts a collected sound signal into an electrical signal, which the audio circuit 760 receives and converts into audio data. After being processed by the audio data output processor 780, the audio data is transmitted via the RF circuit 710 to, for example, another camera device, or output to the memory 720 for further processing.
  • The lens 770 in the camera device can acquire an optical image, including an infrared light image and/or a visible light image; the camera device may have one lens or at least two lenses (not shown), which can be adjusted according to actual design needs.
  • The processor 780 is the control center of the camera device; it connects the various parts of the entire camera device using various interfaces and lines, and performs the various functions of the camera device and processes data by running or executing the software programs and/or modules stored in the memory 720 and calling the data stored in the memory 720, thereby monitoring the camera device as a whole.
  • Optionally, the processor 780 may include one or more processing units; preferably, the processor 780 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interface, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 780.
  • The camera device also includes a power supply 790 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically connected to the processor 780 through a power management system, so that functions such as charging, discharging, and power consumption management are managed through the power management system.
  • the camera device may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the processor 780 included in the camera device further has the following functions:
  • obtaining a visible light image and an infrared light image; acquiring first brightness information and second brightness information, where the first brightness information is brightness information of the visible light image and the second brightness information is brightness information of the infrared light image; fusing the first brightness information with the second brightness information to obtain a contrast fusion image; acquiring first texture information and second texture information, where the first texture information is texture information of the visible light image and the second texture information is texture information of the infrared light image; fusing the first texture information and the second texture information with the contrast fusion image to obtain a texture fusion image; acquiring a color fusion image according to the visible light image and the infrared light image; and fusing the texture fusion image with the color fusion image to obtain a target image.
  • The terminal device provided by the present application may be a mobile phone, a video camera, a monitor, a tablet computer, or the like; the terminal device may further include one or more lenses, with a structure similar to that of the camera device shown in FIG. 7 above, and details are not repeated here.
  • the disclosed system, apparatus, and method may be implemented in other manners.
  • the device embodiments described above are merely illustrative.
  • The division of units is only a logical function division; in actual implementation there may be other division manners. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in an electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as a standalone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a storage medium, which includes a number of instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in connection with FIG. 3 to FIG. 5 in the embodiments of the present application. The foregoing storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Abstract

Embodiments of the present application provide an image processing method and related devices for processing an image acquired by an optical imaging system, performing contrast, texture, and color processing on the image so that the resulting image texture is sharper and the texture and color are closer to the actual texture and color. The image processing method includes: obtaining a visible light image and an infrared light image; acquiring first brightness information of the visible light image and second brightness information of the infrared light image; fusing the first brightness information with the second brightness information to obtain a contrast fusion image; acquiring first texture information of the visible light image and second texture information of the infrared light image; fusing the first texture information and the second texture information with the contrast fusion image to obtain a texture fusion image; obtaining a color fusion image according to the visible light image and the infrared light image; and fusing the texture fusion image with the color fusion image to obtain a target image.

Description

Image processing method and related device
This application claims priority to Chinese Patent Application No. 201810135739.1, filed with the China National Intellectual Property Administration on February 9, 2018 and entitled "Image processing method and related device" (一种图像处理的方法以及相关设备), the entire contents of which are incorporated herein by reference.
技术领域
本申请涉及图像领域,特别设计一种图像处理的方法以及相关设备。
背景技术
随着摄像技术的发展,在高照度下,摄像设备能够拍摄到清晰的图像,而在低照度下,拍摄的图像往往是不清晰的,因此提高在低照度下的图像清晰度一直是摄像设备所亟待提升的问题。
现有方案中,光学成像系统中可以通过分光装置将光线按照波段以及比例进行分离,并通过分离得到的各个频率分量分别成像,得到可见光图像与红外光图像,其中,可见光图像为彩色图像,红外光图像为非彩色图像。然后通过预置的融合算法对可见光图像与红外光图像进行图像融合,将得到的可将光图像与红外光图像上的各个频率分量的成像进行融合,以得到融合后的目标图像。其中,目标图像的色彩分量来自于可将光图像,在确定目标图像的亮度与纹理后,根据该色彩分量进行融合,得到目标图像。
由于红外光图像与可见光图像在亮度分布上有较大不同,不同材质的物体在可见光与红外光下的反光系数不同,因此红外光图像与可见光图像的亮度差异明显,尤其在低照度下,红外光图像与可见光图像的纹理分布以及亮度分布差异较大,通常红外光图像较可见光图像更清晰,红外光图像的纹理更丰富,因此红外光图像下的纹理信息将在融合图像时占用较大比例,因此将造成融合后的目标图像更接近于红外光下的图像纹理,而与图像的实际纹理差异大,产生较严重的失真。
发明内容
本申请实施例提供了一种图像处理的方法以及相关设备,用于处理光学成像系统获取到的图像,对该图像进行对比度、纹理以及色彩的处理,特别在低照度的场景下,使得到的图像纹理更清晰,且纹理与色彩更接近实际的纹理与色彩。
有鉴于此,本申请第一方面提供一种图像处理的方法,包括:
获取可见光图像与红外光图像;获取第一亮度信息与第二亮度信息,该第一亮度信息为该可见光图像的亮度信息,该第二亮度信息为该红外光图像的亮度信息;将该第一亮度信息与该第二亮度信息进行融合,以得到对比度融合图像;获取第一纹理信息与第二纹理信息,该第一纹理信息为该可见光图像的纹理信息,该第二纹理信息为该红外光图像的纹理信息;将该第一纹理信息、该第二纹理信息与该对比度融合图像进行融合,以得到纹理融合图像;根据该可见光图像与该红外光图像获取色彩融合图像;通过对该纹理融合图像与该色彩融合图像进行融合,以得到目标图像。
在本申请实施方式中,首先从可见光图像中获取第一亮度信息以及从红外光图像中获取第二亮度信息,将该第一亮度信息与第二亮度信息进行融合得到对比度融合图像,通过 分别提取亮度信息,可以降低对比度融合图像中的噪声,可以使对比度融合图像中的亮度分布更均匀,更接近可见光下的亮度分布,之后从可见光图像中提取第一纹理信息以及从红外光图像中提取第二纹理信息,然后将第一纹理信息、第二纹理信息与对比度融合图像进行融合得到纹理融合图像,可以使得到的纹理融合图像中的纹理更清晰,且可以通过红外光图像与可见光图像进行色彩融合,以得到色彩融合图像,增加了红外光图像作为色彩融合图像的基础,可减少色彩缺失,偏色或噪声大等情况,最后将色彩融合图像与纹理融合图像进行融合,以得到目标图像,可以降低目标图像的噪声,使目标图像的温流更清晰,亮度分布更接近可见光下的亮度分布。
结合本申请第一方面,在本申请第一方面的第一种实施方式中,该根据可见光图像与该红外光图像获取色彩融合图像,可以包括:
对该可见光图像进行色彩感知复原,以得到色彩感知复原图像;对该红外光图像按照预置的色彩对应关系进行色彩推理,以得到色彩推理图像;将该色彩感知复原图像与该色彩推理图像进行融合,以得到该色彩融合图像。
在本申请实施方式中,可以对可将光图像进行色彩感知复原,可以对可见光图像下的色彩进行感知复原,可以对部分缺失的色彩进行复原,因红外光图像中的色彩分量与可见光下的色彩分量有对应关系,可以按照预置的色彩对关系对红外光图像进行色彩推理,以得到色彩推理图像,然后将色彩感知复原图像与色彩推理图像进行融合,以得到色彩融合图像,可以通过色彩推理图像中的色彩分量填补可见光下的色彩缺失部分,偏色的部分或噪声大的部分,使色彩融合图像的色彩更完整,降低色彩融合图像中的噪声,进一步降低目标图像中的色彩噪声,改善色彩缺失或偏色等情况。
结合本申请第一方面或本申请第一方面的第一种实施方式,在本申请第一方面的第二种实施方式中,该将该第一亮度信息与该第二亮度信息进行融合,以得到对比度融合图像,可以包括:
通过预置的第一公式对该第一亮度信息以及该第二亮度信息进行计算,以得到目标亮度值;通过该目标亮度值得到该对比度融合图像。
在本申请实施方式中,可以通过预置的第一公式对第一亮度信息与第二亮度信息进行计算,增加了一种得到对比融合图像的方式。
结合本申请第一方面、本申请第一方面的第一种实施方式或本申请第一方面的第二种实施方式中任一实施方式,在本申请第一方面的第三种实施方式中,该将该第一纹理信息、该第二纹理信息与该对比度融合图像进行融合,以得到纹理融合图像,可以包括:
通过预置的第二公式对该第一纹理信息以及该第二纹理信息进行计算,以得到目标纹理像素值;将该目标纹理像素值叠加到该对比度融合图像中,以得到该纹理融合图像。
在本申请实施方式中,可以通过预置的第二公式对第一纹理信息与第二纹理信息进行计算,增加了一种得到纹理融合图像的方式。
结合本申请第一方面的第一种实施方式,在本申请第一方面的第四种实施方式中,该对该红外光图像按照预置的色彩对应关系进行色彩推理,以得到色彩推理图像,可以包括:
对该红外光图像按照预置的色彩对应关系确定色彩分量的比值;根据该色彩分量的比值按照预置的计算方式确定目标色彩,以得到该色彩推理图像。
得到色彩推理图像的具体过程可以是通过根据红外光图像中的色彩分量按照预置的色彩对应关系确定色彩推理图像中色彩分量的比值,以得到色彩图像,增加了一种得到色 彩推理图像的方式。
结合本申请第一方面的第一种实施方式或本申请第一方面的第四种实施方式,在本申请第一方面的第五种实施方式中,该对该可见光图像进行色彩感知复原,以得到色彩感知复原图像,可以包括:
将该可见光图像的亮度反转,以得到亮度反转图像;根据透雾算法对该亮度反转图像进行计算,以得到亮度与色彩增强后的增强图像;将该增强图像进行反转,以得到该色彩感知复原图像。
在本申请实施方式中可以将可见光图像的亮度反转,然后通过透雾算法对反转后的可见光图像进行计算,以得到亮度与色彩都增强后的图像,然后将亮度与色彩都增强后的图像再进行反转,可以得到色彩与亮度都增强后的色彩感知图像。
结合本申请第一方面、本申请第一方面的第一种实施方式至本申请第一方面的第五种实施方式中任一实施方式,在本申请第一方面的第六种实施方式中,该通过对该纹理融合图像与该色彩融合图像进行融合,以得到目标图像,可以包括:
将该纹理融合图像中的亮度信息与该色彩融合图像中的色彩分量进行融合,以得到该目标图像。
从确定纹理融合图像中的亮度信息以及色彩融合图像中的色彩分量,然后将亮度信息与色彩分量进行叠加、或按比例计算等方式融合,以得到目标图像,可以使目标图像的色彩更完整,可以改善目标图像中的偏色,噪声大以及纹理不清晰,亮度分布与可见光下相差大等问题。
本申请第二方面提供一种图像处理装置,具有实现对应于上述本申请第一方面或第一方面任一实施方式中的图像处理的方法的功能。所述功能可以通过硬件实现,也可以通过硬件执行相应的软件实现。所述硬件或软件包括一个或多个与上述功能相对应的模块。
本申请第三方提供一种摄像装置,可以包括:
镜头、处理器、存储器、总线以及输入输出接口;该镜头,用于获取光学图像;该存储器,用于存储程序代码;该处理器调用该存储器中的程序代码时执行本申请第一方面或第一方面任一实施方式中的步骤。
A fourth aspect of this application provides a terminal device, including:
a lens, a processor, a memory, a bus, and an input/output interface, where the lens is configured to obtain an optical image; the memory is configured to store program code; and when invoking the program code in the memory, the processor performs the steps in the first aspect of this application or any implementation of the first aspect.
A fifth aspect of this application provides a storage medium. It should be noted that the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and is configured to store computer software instructions for use by the foregoing devices, including a program designed for performing the first aspect.
The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
A sixth aspect of embodiments of this application provides a computer program product including instructions that, when run on a computer, cause the computer to perform the method according to the first aspect of this application or any optional implementation of the first aspect.
A seventh aspect of this application provides a chip system. The chip system includes a processor, configured to support an image processing apparatus in implementing the functions in the first aspect, for example, transmitting or processing the data and/or information in the foregoing method.
In a possible design, the chip system further includes a memory, configured to store program instructions and data necessary for the image processing method. The chip system may consist of a chip, or may include a chip and other discrete devices.
It can be learned from the foregoing technical solutions that the embodiments of this application have the following advantages:
In the embodiments of this application, after a visible light image and an infrared light image are obtained, a contrast-enhanced contrast-fused image is obtained from the luminance information of the visible light image and the infrared light image. During texture fusion, the texture information of the visible light image and the infrared light image is fused to obtain a texture-fused image with clearer texture, and a color-fused image is then obtained from the infrared light image and the visible light image. Therefore, by obtaining the color-fused image from the infrared light image and the visible light image and fusing it with the texture-fused image to obtain a target image, a target image whose color is closer to the actual color can be obtained. The texture-fused image is determined from both the luminance information and the texture information of the infrared light image and the visible light image. Compared with the existing solution, which determines the texture-fused image only from the proportion of the texture information of the infrared light image and the visible light image, the embodiments of this application additionally take the contrast of the visible light image and the infrared light image into account during texture fusion, so that the resulting texture-fused image has clearer texture, the luminance and texture of the target image are closer to the actual luminance and texture under visible light, and image distortion is reduced.
Brief Description of Drawings
FIG. 1 is a framework diagram of an image processing method according to an embodiment of this application;
FIG. 2 is a schematic diagram of an image synthesized by an existing solution;
FIG. 3 is a schematic diagram of an embodiment of an image processing method according to an embodiment of this application;
FIG. 4 is a schematic diagram of another embodiment of an image processing method according to an embodiment of this application;
FIG. 5 is a schematic diagram of another embodiment of an image processing method according to an embodiment of this application;
FIG. 6 is a schematic diagram of an embodiment of an image processing apparatus according to an embodiment of this application;
FIG. 7 is a schematic diagram of an embodiment of a camera apparatus according to an embodiment of this application.
Description of Embodiments
Embodiments of this application provide an image processing method and a related device for processing an image obtained by an optical imaging system. Contrast, texture, and color processing is applied to the image so that, especially in a low-illumination scenario, the resulting image has clearer texture, and its texture and color are closer to the actual texture and color.
Imaging technology is widely used in daily life, industry, and commerce. For example, surveillance devices play an important role in industry and commerce, and improving the clarity of the images they produce has long been pursued. However, existing solutions can obtain clear images only under good lighting. In a low-illumination scenario, an IR-CUT filter blocks the infrared light in the environment around the surveillance device, so the infrared light in the environment cannot be used effectively, reducing the overall amount of light available for imaging. Existing solutions fuse the visible light image and the infrared light image directly by a fusion algorithm; because the frequency range of the noise is close to that of the image details, the synthesized image cannot distinguish noise from detail, so the synthesized image is very noisy. Moreover, because only the color components of the visible light image are used to synthesize the image, colors in the visible light image are easily lost or shifted in a low-illumination scenario, causing color distortion in the image synthesized from the visible light image and the infrared light image. In addition, in a low-illumination scenario, the texture of the infrared light image is clearer than that of the visible light image, but the luminance distribution and texture details of the visible light image differ greatly; the luminance and texture of the synthesized image tend to follow those of the infrared light image, so the texture of the synthesized image differs greatly from the actual texture under visible light. Therefore, to solve the problems of color distortion, large luminance and texture differences, and heavy noise in existing solutions, an embodiment of this application provides an image processing method.
It should be understood that the low-illumination scenario described in the embodiments of this application is a scenario in which the illuminance is below a threshold. The low-illuminance threshold may be adjusted according to the characteristics of the components in the actual optical imaging system, including the sensor and the beam splitter. For example, if the components perform well, the threshold may be lowered; if the components perform poorly, the threshold may be raised.
The framework of the image processing method in this embodiment of this application is shown in FIG. 1. A visible light image and an infrared light image may be obtained by an optical imaging system, which may be the camera of a surveillance device, or the camera of a terminal device or a video camera. Image fusion is then performed on the visible light image and the infrared light image to obtain a target image: the luminance information, texture information, and color information of the visible light image and the infrared light image are fused separately to obtain a clear target image whose texture and color are closer to the actual image color.
The image processing method provided in this embodiment of this application can make the resulting target image closer to the actual texture and color. For example, an image obtained by an existing solution is shown in FIG. 2. Under low illumination, because the texture of the infrared light image is clearer than the luminance and texture of the visible light image during fusion in the existing solution, the luminance of the infrared light image takes a larger proportion than that of the visible light image, so the luminance of the fused image differs considerably from the actual luminance and texture under visible light. For example, the synthesized "trees" in the image in FIG. 2 are too bright, differing considerably from the brightness of the "trees" under actual visible light.
Therefore, in this embodiment of this application, luminance information and texture information are obtained separately from the visible light image and the infrared light image to obtain a clearer texture-fused image, and a color-fused image is obtained from the color information of the visible light image and the infrared light image and synthesized with the texture-fused image into the target image, which can enhance the color of the image. For the specific procedure of image processing in this embodiment of this application, refer to FIG. 3. The procedure includes the following steps.
301. Obtain a visible light image and an infrared light image.
The visible light image and the infrared light image may be obtained by an optical imaging system. For example, they may be obtained by the camera of a surveillance device, or by a single camera or multiple cameras of a mobile phone.
302. Obtain first luminance information and second luminance information.
After the visible light image and the infrared light image are obtained, luminance information is obtained from each of them. The first luminance information is the luminance information of the visible light image, and the second luminance information is the luminance information of the infrared light image. The first luminance information may include the luminance value of each pixel in the visible light image, and the second luminance information may include the luminance value of each pixel in the infrared light image.
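As an illustration of what per-pixel luminance information can look like in practice (this sketch is not part of the original filing), the visible light image's luma plane can be taken from a YUV conversion while the infrared image is already a single luminance channel; the file names below are hypothetical.

    import cv2

    # Hypothetical inputs: a color visible-light frame and a
    # single-channel infrared frame of the same size.
    visible_bgr = cv2.imread("visible.png")                      # H x W x 3, BGR
    infrared = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)  # H x W

    # First luminance information: the Y (luma) plane of the visible image.
    visible_y = cv2.cvtColor(visible_bgr, cv2.COLOR_BGR2YUV)[:, :, 0]

    # Second luminance information: the infrared image itself serves as
    # a luminance plane.
    infrared_y = infrared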
303. Fuse the first luminance information with the second luminance information to obtain a contrast-fused image.
The first luminance information is fused with the second luminance information to obtain a contrast-fused image. A specific fusion manner may be calculating the luminance values in the first luminance information and the luminance values in the second luminance information in proportion to obtain a target luminance value for each pixel, and then composing the contrast-fused image from the target luminance values of the pixels, so that the luminance of the contrast-fused image is closer to the luminance of the actual image. The proportion may be obtained from a formula, or may be preset. For example, if the luminance value of a pixel in the visible light image is 200 nit and the luminance value of the corresponding pixel in the infrared light image is 400 nit, the proportions of the visible light and infrared light luminance values may be calculated according to a preset formula, for example yielding a luminance value of 320 nit for that pixel.
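The patent does not publish the exact weighting formula at this point, so the minimal sketch below simply applies a fixed per-pixel weighting of 0.4 visible to 0.6 infrared, chosen only because it reproduces the 200 nit / 400 nit → 320 nit example above; a real implementation would derive the weights from the preset formula.

    import numpy as np

    def fuse_luminance(visible_y: np.ndarray, infrared_y: np.ndarray,
                       w_visible: float = 0.4) -> np.ndarray:
        """Proportionally blend two luminance planes into one.

        With w_visible = 0.4: 0.4 * 200 + 0.6 * 400 = 320, matching
        the numeric example in the text.
        """
        vis = visible_y.astype(np.float32)
        ir = infrared_y.astype(np.float32)
        fused = w_visible * vis + (1.0 - w_visible) * ir
        return np.clip(fused, 0, 255).astype(np.uint8)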
304. Obtain first texture information and second texture information.
The first texture information is obtained from the visible light image, and the second texture information is obtained from the infrared light image. In actual scenarios, the texture of the infrared light image is usually clearer than that of the visible light image, so the proportion of infrared light texture may be increased when synthesizing the image.
It should be noted that this embodiment of this application does not limit the execution order of step 302 and step 304; step 302 may be performed first, or step 304 may be performed first. This is not specifically limited here.
305. Fuse the first texture information and the second texture information with the contrast-fused image to obtain a texture-fused image.
After the first texture information and the second texture information are obtained, the pixel values of all textures in the visible light image and the infrared light image may be obtained. The pixel values in the first texture information and those in the second texture information may be calculated to obtain target texture pixel values, which are then superimposed onto the contrast-fused image to obtain the texture-fused image, making the texture of the resulting texture-fused image clearer.
In a low-illumination scenario, the visible light image loses much texture detail, the infrared light image has richer texture detail than the visible light image, and the infrared light image has less noise than the visible light image. Therefore, during texture fusion, the proportion of the richer texture information from the infrared light image may be increased, so that the resulting texture-fused image has clearer texture and less noise.
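As a rough illustration of steps 304 and 305 (the preset formula itself is not reproduced at this point in the text), the sketch below extracts texture as the difference between each luminance plane and a smoothed copy, blends the two detail maps with a weight favoring the infrared image, and superimposes the result onto the contrast-fused image; the weight value is an assumption.

    import cv2
    import numpy as np

    def fuse_texture(visible_y, infrared_y, contrast_fused, w_ir=0.7):
        """Blend visible and infrared texture details onto a fused base.

        Texture is approximated as the image minus its Gaussian-blurred
        copy; w_ir is a hypothetical weight giving the (usually richer)
        infrared detail a larger share.
        """
        vis = visible_y.astype(np.float32)
        ir = infrared_y.astype(np.float32)
        detail_vis = vis - cv2.GaussianBlur(vis, (5, 5), 0)
        detail_ir = ir - cv2.GaussianBlur(ir, (5, 5), 0)
        target_detail = (1.0 - w_ir) * detail_vis + w_ir * detail_ir
        fused = contrast_fused.astype(np.float32) + target_detail
        return np.clip(fused, 0, 255).astype(np.uint8)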
It should be noted that this embodiment of this application does not limit the execution order of step 302 and step 305; step 302 may be performed first, or step 305 may be performed first. This is not specifically limited here.
306. Obtain a color-fused image based on the visible light image and the infrared light image.
Color information is obtained separately from the visible light image and the infrared light image. Color perception restoration may be performed on the visible light image to obtain its color information, and color inference learning may be performed on the infrared light image according to a preset color correspondence to obtain its color information, so as to fill in the colors partially missing from the visible light image. The color information of the visible light image and that of the infrared light image are calculated to obtain the color components of each pixel in the color-fused image, thereby obtaining the color-fused image.
In a low-illumination scenario, the visible light image has heavy color grain noise and severe color distortion, whereas the color-fused image obtained by fusing the color information inferred from the infrared light image with the color information restored from the visible light image has lower color noise and colors closer to the actual colors under visible light.
307. Fuse the texture-fused image with the color-fused image to obtain a target image.
After the texture-fused image and the color-fused image are obtained, the luminance component of the target image may be obtained from the texture-fused image, and the color components of the target image may be obtained from the color-fused image. The luminance component and the color components are then combined to obtain the target image.
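Step 307 amounts to pairing one image's luma with the other image's chroma. A minimal sketch, assuming the texture-fused image is a single luminance plane and the color-fused image is a BGR image of the same size:

    import cv2

    def combine_luma_chroma(texture_fused_y, color_fused_bgr):
        """Take Y from the texture-fused image and U/V from the
        color-fused image, then convert back to BGR."""
        yuv = cv2.cvtColor(color_fused_bgr, cv2.COLOR_BGR2YUV)
        yuv[:, :, 0] = texture_fused_y  # replace luma, keep chroma
        return cv2.cvtColor(yuv, cv2.COLOR_YUV2BGR)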
In this embodiment of this application, after the visible light image and the infrared light image are obtained, luminance information is obtained from each of them, and a contrast-fused image is obtained from the luminance information. The texture information obtained from the visible light image and the infrared light image is then fused with the contrast-fused image to obtain a texture-fused image. Compared with the original visible light image and infrared light image, the texture in the texture-fused image is clearer, and the luminance distribution is closer to the actual luminance distribution under visible light. A color-fused image is then obtained from the color information obtained separately from the visible light image and the infrared light image; the colors inferred from the infrared light image can fill in the colors missing from the visible light image, so the resulting color-fused image can include complete colors. Therefore, the target image obtained from the color-fused image and the texture-fused image has clearer texture, a luminance distribution closer to the actual lighting, and more complete colors, reducing the color loss in the target image that would otherwise be caused by missing colors in the visible light image. Moreover, because the luminance and texture information of the infrared light image and the visible light image are fused separately when the target image is produced, noise in the synthesized target image can be reduced.
The following describes specific steps of the image processing method in this embodiment of this application. Refer to FIG. 4, a schematic diagram of another embodiment of the image processing method in this embodiment of this application.
First, a visible light image 401 and an infrared light image 402 are obtained by the optical imaging system. The visible light image 401 is passed through denoiser 1 to filter out part of the noise in the visible light image, for example grain noise, and the infrared light image 402 is passed through denoiser 2 to filter out part of the noise in the infrared light image. Denoiser 1 and denoiser 2 may be image signal processing (ISP) denoisers, which can perform image processing on the infrared light image and the visible light image, including exposure control, white balance control, and noise reduction. After processing by ISP denoiser 1 and ISP denoiser 2, the images are YUV (luminance signal Y and chrominance signals U, V) format images with accurate color and luminance distribution; that is, the luminance components of the visible light image and the infrared light image can be obtained.
The optical imaging system may be one camera or multiple cameras; here one camera is used as an example. For example, as shown in FIG. 5, the lens may include multiple layers of lens elements. An image is first captured by the lens and may then be split by a beam-splitting prism, generating the visible light image 401 on sensor 1 and the infrared light image 402 on sensor 2. The optical imaging system may alternatively generate the visible light image and the infrared light image directly with separate imaging devices; this can be adjusted according to actual design requirements and is not limited here.
First luminance information is extracted from the partially denoised visible light image and second luminance information from the infrared light image, and the two are fused to obtain a contrast-fused image 405. A specific fusion process may be calculating the local contrast in the visible light image and the corresponding local contrast in the infrared light image, and then calculating, according to a preset gradient feature, the weight of each component in the local contrast of the visible light image and the corresponding local contrast of the infrared light image. Taking a corresponding local region of the infrared light image and the visible light image as an example, when the local contrast of the infrared light image and the local contrast of the visible light image differ greatly from the preset gradient feature, the local contrast of the infrared light image is favored when synthesizing the contrast-fused image; that is, when the difference from the preset gradient feature is large, the local contrast of the infrared light image is given a larger weight, and the local contrast of the infrared light image is used more as the local contrast of the contrast-fused image.
In an actual application, a specific contrast fusion process may be as follows, taking a corresponding local region of the visible light image and the infrared light image as an example; the local region may be a pixel matrix, for example a 6×6 pixel matrix. A first luminance component of the local region of the visible light image is obtained from a first weight matrix of the local pixels in the visible light image and the corresponding first local image window, where the first weight matrix is the weight matrix of the pixels in the local image window of the visible light image and may be preset or calculated from actual luminance data. A second luminance component of the corresponding local region of the infrared light image is obtained from a second weight matrix of the local region in the infrared light image and the corresponding local image window, where the second weight matrix is the weight matrix of the pixels in the local image window of the infrared light image and may be preset or calculated from actual data. Then, an appropriate luminance value s is calculated from the first formula together with the first luminance component and the second luminance component. The first formula may be:
[The first formula is published as an image (PCTCN2018123383-appb-000001) in the original document and is not reproduced here; it computes the target luminance value s from p, W, Q, and μ defined below.]
where p is the luminance value of the visible light image, W is a preset fixed matrix, Q is the luminance value of the infrared light image, μ is a preset coefficient that can be adjusted according to actual requirements, and s_i is the luminance value of pixel i. After the target luminance value s of each pixel is calculated, the obtained luminance values are transferred to the infrared light image: a transform matrix x′ of the contrast-transferred infrared image can be obtained from s as x′ = x·s, yielding the contrast-fused image.
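Because the first formula itself is published only as an image, the sketch below substitutes a simple regularized per-pixel gain: s is chosen to pull the infrared luminance q toward the visible luminance p, μ biases the gain toward 1, and the gain is then applied as x′ = x·s as described above. The closed form used here is an assumption, not the patented formula.

    import numpy as np

    def contrast_transfer(p, q, mu=0.5, eps=1e-6):
        """Per-pixel gain s minimizing (q*s - p)^2 + mu*(s - 1)^2,
        then x' = x * s with x = q, i.e. visible-light contrast is
        transferred onto the infrared luminance plane. This stands in
        for the unpublished first formula.
        """
        p = p.astype(np.float32)
        q = q.astype(np.float32)
        s = (q * p + mu) / (q * q + mu + eps)  # closed-form minimizer
        return np.clip(q * s, 0, 255).astype(np.uint8)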
After the contrast-fused image 405 is obtained, a texture-fused image 406 is obtained from the contrast-fused image. First texture information is extracted from the partially denoised visible light image, and second texture information from the partially denoised infrared light image. The pixel values included in the first texture information and the second texture information are then calculated and superimposed onto the contrast-fused image to obtain the texture-fused image. A specific process may be calculating the details in the visible light image and the details in the infrared light image, then calculating the optimal pixel value, that is, the target pixel value, of each texture detail according to a preset formula, and superimposing the optimal pixel value of each texture detail onto the contrast-fused image to obtain the texture-fused image 406.
Specifically, taking the fusion of one pixel as an example, the visible light part may include: obtaining the current visible light pixel x_o(i*) and the non-local-means filtered visible light pixel value x_{o,b}, and subtracting them to obtain the visible light texture detail Δx_o(i*) = x_o(i*) − x_{o,b}. The infrared light part may include: obtaining the current infrared light pixel value x_n and the non-local-means filtered infrared light pixel value x_{n,b}, and subtracting them to obtain the infrared light texture detail Δx_n = x_n − x_{n,b}. The optimal texture detail value Δx is then calculated according to a preset second formula, which may be:
[The second formula is published as an image (PCTCN2018123383-appb-000002) in the original document and is not reproduced here; it computes the optimal texture detail value Δx from Δx_o(i*), Δx_n, μ_d, and f_j defined below.]
where μ_d is a preset coefficient that can be adjusted according to actual requirements, and f_j is a preset local weighting matrix. The calculated image pixel value is superimposed onto the contrast-fused image, and the value of the pixel in the texture-fused image is x_o(i*) ← x_{o,b} + Δx.
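The second formula is likewise published only as an image, so the sketch below reproduces the surrounding description with OpenCV's non-local-means filter as the smoother and combines the two detail maps with a hypothetical weight μ_d in place of the unpublished formula, before adding the result back onto the filtered base.

    import cv2
    import numpy as np

    def optimal_detail(x_o, x_n, mu_d=0.5):
        """Texture details from non-local-means bases, blended with a
        hypothetical weight standing in for the second formula."""
        x_ob = cv2.fastNlMeansDenoising(x_o, h=10)   # filtered visible base
        x_nb = cv2.fastNlMeansDenoising(x_n, h=10)   # filtered infrared base
        d_o = x_o.astype(np.float32) - x_ob          # visible detail
        d_n = x_n.astype(np.float32) - x_nb          # infrared detail
        d = (mu_d * d_o + d_n) / (1.0 + mu_d)        # blended detail
        return np.clip(x_ob.astype(np.float32) + d, 0, 255).astype(np.uint8)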
In addition, in this embodiment of the present invention, color fusion is further performed on the denoised visible light image and the denoised infrared light image to obtain a color-fused image 409. A specific color fusion process may be performing color perception restoration on the partially denoised visible light image to obtain a color perception restoration image 407. A specific color perception restoration process may be inverting the luminance of the partially denoised visible light image to obtain a luminance-inverted visible light image, enhancing the luminance and color of the luminance-inverted visible light image by a defogging algorithm, and then inverting the enhanced inverted image again to obtain the visible light image with enhanced luminance and color. For example, the ratio relationship of the gray values of adjacent pixels in the luminance-inverted visible light image is calculated, the gray value of each pixel is corrected by that ratio relationship, and the corrected gray values are then linearly enhanced to obtain the enhanced inverted image; inverting that image yields the visible light image with enhanced luminance and color.
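The invert–defog–invert pipeline described here is a known low-light enhancement trick: an inverted dark image resembles a hazy image, so a defogger brightens it. A compact sketch, assuming a basic dark-channel-prior defogger (the parameter values are illustrative, not from the patent):

    import cv2
    import numpy as np

    def dehaze_dark_channel(img, omega=0.9, patch=15, t_min=0.1):
        """Very small dark-channel-prior defogger (illustrative only)."""
        f = img.astype(np.float32) / 255.0
        kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
        dark = cv2.erode(f.min(axis=2), kernel)          # dark channel
        # Atmospheric light from the 100 brightest dark-channel pixels.
        a = f.reshape(-1, 3)[np.argsort(dark.ravel())[-100:]].max(axis=0)
        t = np.maximum(1.0 - omega * dark, t_min)        # transmission
        j = (f - a) / t[..., None] + a                   # scene radiance
        return np.clip(j * 255.0, 0, 255).astype(np.uint8)

    def color_perception_restore(visible_bgr):
        """Invert, defog, invert back: brightens dark regions and
        restores color in the visible-light image."""
        inverted = 255 - visible_bgr
        enhanced = dehaze_dark_channel(inverted)
        return 255 - enhanced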
However, color information that is partially lost in the visible light image, or parts of the image with an excessive color cast, cannot be corrected by color perception restoration. Therefore, if part of the visible light image is too dark or too noisy, color inference may also be performed on the infrared light image to further correct color loss or color cast in the visible light image.
The color components in the infrared light image have a correspondence with the color components in the visible light image. The correspondence may be preset, or the correspondence between the RGB (red, green, blue) components in the infrared light image and the colors in the visible light image may be obtained from a large amount of data by machine learning. Therefore, the color components in the infrared light image can be inferred through this correspondence to obtain an image whose colors correspond to those of the visible light image, that is, a color inference image 408. The color inference image can be used to correct the missing or color-cast parts of the visible light image, so as to obtain an image whose colors are closer to those under actual lighting.
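As a toy stand-in for the learned IR→visible color correspondence, the sketch below fits a 3×3 linear map by least squares from paired pixels of a registered infrared/visible image pair (the pairing is a hypothetical setup) and applies it to the infrared image; a real system would learn a much richer mapping from large datasets.

    import numpy as np

    def fit_color_map(ir_pixels, vis_pixels):
        """Least-squares 3x3 matrix M with vis ≈ ir @ M, fitted from
        registered infrared/visible pixel pairs (N x 3 float arrays)."""
        m, *_ = np.linalg.lstsq(ir_pixels, vis_pixels, rcond=None)
        return m

    def infer_colors(ir_bgr, m):
        """Apply the fitted map to every pixel of the infrared image."""
        flat = ir_bgr.reshape(-1, 3).astype(np.float32)
        out = flat @ m
        return np.clip(out, 0, 255).astype(np.uint8).reshape(ir_bgr.shape)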
It should be understood that this embodiment of this application does not limit the order of obtaining the color perception restoration image 407 and the color inference image 408; the color perception restoration image 407 may be obtained first, or the color inference image 408 may be obtained first. This can be adjusted according to actual requirements and is not specifically limited here.
The color perception restoration image 407 can therefore be fused with the color inference image 408 to obtain the color-fused image 409. In an actual application, color correction may be decided with reference to the luminance values of the visible light image: if part of the color perception restoration image is too dark or too noisy, the reference proportion of the corresponding part of the color inference image may be increased; that is, the color components of the corresponding part of the color inference image may be used to correct the color of the part that is too dark or too noisy, so as to obtain a color-fused image with more complete colors. Therefore, using the visible light image and the infrared light image together to determine the color of the target image can mitigate the problems of color noise, color distortion, and color smudging in a target image obtained in a low-illumination scenario.
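One way to realize the luminance-guided correction described above is a per-pixel alpha blend in which dark regions of the color perception restoration image lean more heavily on the color inference image; the smooth weighting and threshold below are assumptions.

    import cv2
    import numpy as np

    def fuse_colors(restored_bgr, inferred_bgr, visible_y, y_low=40.0):
        """Blend the two color sources; the weight of the inference
        image rises as visible luminance falls below a hypothetical
        threshold y_low."""
        y = cv2.GaussianBlur(visible_y.astype(np.float32), (11, 11), 0)
        w_infer = np.clip((y_low - y) / y_low, 0.0, 1.0)[..., None]
        out = (1.0 - w_infer) * restored_bgr.astype(np.float32) \
              + w_infer * inferred_bgr.astype(np.float32)
        return np.clip(out, 0, 255).astype(np.uint8)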
It should be understood that this embodiment of this application does not limit the order of obtaining the texture-fused image 406 and the color-fused image 409; the texture-fused image 406 may be obtained first, or the color-fused image 409 may be obtained first. This can be adjusted according to actual requirements and is not specifically limited here.
After the texture-fused image 406 and the color-fused image 409 are obtained, the texture-fused image is fused with the color-fused image: the texture details in the texture-fused image and the color components in the color-fused image are superimposed and combined to obtain the target image 410.
In this embodiment of this application, luminance information is obtained separately from the visible light image and the infrared light image, and a contrast-fused image is obtained from the luminance information; the texture information obtained from the visible light image and the infrared light image is then fused with the contrast-fused image to obtain a texture-fused image whose texture is clearer and whose luminance distribution is closer to the actual luminance distribution under visible light. A color-fused image is then obtained from the color information obtained separately from the visible light image and the infrared light image; the colors inferred from the infrared light image can fill in the colors missing from the visible light image, so the resulting color-fused image can include complete colors. Therefore, the target image obtained from the color-fused image and the texture-fused image has clearer texture and a luminance distribution closer to the actual lighting, its colors are more complete, and the color loss caused by missing colors in the visible light image is reduced. Moreover, fusing the luminance and texture information of the infrared light image and the visible light image separately when producing the target image reduces noise in the synthesized target image, reduces color loss and color cast in the synthesized image, and mitigates the problems of color noise, color distortion, and color smudging in a target image obtained under low illumination.
The foregoing describes in detail the image processing method in the embodiments of this application. The following describes the image processing apparatus in the embodiments of this application. Refer to FIG. 6, a schematic diagram of an embodiment of the image processing apparatus in an embodiment of this application, which may include:
an image obtaining module 601, configured to obtain a visible light image and an infrared light image;
a luminance information obtaining module 602, configured to obtain first luminance information and second luminance information, where the first luminance information is luminance information of the visible light image, and the second luminance information is luminance information of the infrared light image;
a contrast fusion module 603, configured to fuse the first luminance information with the second luminance information to obtain a contrast-fused image;
a texture information obtaining module 604, configured to obtain first texture information and second texture information, where the first texture information is texture information of the visible light image, and the second texture information is texture information of the infrared light image;
a texture fusion module 605, configured to fuse the first texture information and the second texture information with the contrast-fused image to obtain a texture-fused image;
a color fusion module 606, configured to obtain a color-fused image based on the visible light image and the infrared light image; and
a target image synthesis module 607, configured to fuse the texture-fused image with the color-fused image to obtain a target image.
Optionally, in some possible embodiments, the color fusion module 606 may include:
a perception restoration submodule 6061, configured to perform color perception restoration on the visible light image to obtain a color perception restoration image;
a color inference submodule 6062, configured to perform color inference on the infrared light image according to a preset color correspondence to obtain a color inference image; and
a color fusion submodule 6063, configured to fuse the color perception restoration image with the color inference image to obtain the color-fused image.
Optionally, in some possible embodiments, the contrast fusion module 603 is specifically configured to:
calculate the first luminance information and the second luminance information by using a preset first formula to obtain a target luminance value; and
obtain the contrast-fused image from the target luminance value.
Optionally, in some possible embodiments, the texture fusion module 605 is specifically configured to:
calculate the first texture information and the second texture information by using a preset second formula to obtain a target texture pixel value; and
superimpose the target texture pixel value onto the contrast-fused image to obtain the texture-fused image.
Optionally, in some possible embodiments, the color inference submodule 6062 is specifically configured to:
determine ratios of color components for the infrared light image according to the preset color correspondence; and
determine a target color from the ratios of the color components according to a preset calculation manner to obtain the color inference image.
Optionally, in some possible embodiments, the perception restoration submodule 6061 is specifically configured to:
invert the luminance of the visible light image to obtain a luminance-inverted image;
calculate the luminance-inverted image by using a defogging algorithm to obtain an enhanced image with enhanced luminance and color; and
invert the enhanced image to obtain the color perception restoration image.
Optionally, in some possible embodiments, the target image synthesis module 607 is specifically configured to:
fuse luminance information in the texture-fused image with color components in the color-fused image to obtain the target image.
In another possible design, when the image processing apparatus is a chip in a terminal, the chip includes a processing unit and a communication unit. The processing unit may be, for example, a processor, and the communication unit may be, for example, an input/output interface, a pin, or a circuit. The processing unit can execute computer-executable instructions stored in a storage unit, so that the chip in the terminal performs the image processing method of any implementation of the first aspect. Optionally, the storage unit is a storage unit in the chip, such as a register or a cache; the storage unit may alternatively be a storage unit in the terminal but outside the chip, such as a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM).
The processor mentioned anywhere above may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling program execution of the image processing method of the first aspect.
An embodiment of the present invention further provides a camera apparatus, as shown in FIG. 7. For ease of description, only parts related to this embodiment of the present invention are shown; for specific technical details not disclosed, refer to the method part of the embodiments of the present invention. The camera apparatus may be any terminal device, including a mobile phone, a tablet computer, a personal digital assistant (PDA), a point of sales (POS) terminal, or an in-vehicle computer:
FIG. 7 is a block diagram of a partial structure of the camera apparatus provided in this embodiment of the present invention. Referring to FIG. 7, the camera apparatus includes components such as a radio frequency (RF) circuit 710, a memory 720, an input unit 730, a display unit 740, a sensor 750, an audio circuit 760, a lens 770, a processor 780, and a power supply 790. A person skilled in the art can understand that the camera apparatus structure shown in FIG. 7 does not constitute a limitation on the camera apparatus, which may include more or fewer components than shown, combine some components, or arrange the components differently.
The following describes each component of the camera apparatus in detail with reference to FIG. 7:
The RF circuit 710 may be configured to receive and send signals during information transmission and reception or during a call. In particular, after receiving downlink information from a base station, the RF circuit passes it to the processor 780 for processing, and sends related uplink data to the base station. Generally, the RF circuit 710 includes but is not limited to an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 710 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), email, and Short Messaging Service (SMS).
The memory 720 may be configured to store software programs and modules. The processor 780 executes various functional applications and data processing of the camera apparatus by running the software programs and modules stored in the memory 720. The memory 720 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, applications required by at least one function (such as a sound playback function and an image playback function), and the like; and the data storage area may store data created according to the use of the camera apparatus (such as audio data and a phone book). In addition, the memory 720 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another solid-state storage device.
The input unit 730 may be configured to receive input numeric or character information and to generate key signal input related to user settings and function control of the camera apparatus. Specifically, the input unit 730 may include a touch panel 731 and another input device 732. The touch panel 731, also called a touchscreen, can collect touch operations by a user on or near it (such as operations performed by the user on or near the touch panel 731 with a finger, a stylus, or any other suitable object or accessory) and drive a corresponding connection apparatus according to a preset program. Optionally, the touch panel 731 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch direction of the user, detects the signal brought by a touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 780, and can receive and execute commands sent by the processor 780. In addition, the touch panel 731 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 731, the input unit 730 may further include the other input device 732, which may specifically include but is not limited to one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick.
The display unit 740 may be configured to display information input by the user or information provided to the user, as well as various menus of the camera apparatus. The display unit 740 may include a display panel 741; optionally, the display panel 741 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 731 may cover the display panel 741. When detecting a touch operation on or near it, the touch panel 731 transmits the operation to the processor 780 to determine the type of the touch event, and the processor 780 then provides corresponding visual output on the display panel 741 according to the type of the touch event. Although in FIG. 7 the touch panel 731 and the display panel 741 implement the input and output functions of the camera apparatus as two independent components, in some embodiments the touch panel 731 and the display panel 741 may be integrated to implement the input and output functions of the camera apparatus.
The camera apparatus may further include at least one sensor 750, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 741 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 741 and/or the backlight when the camera apparatus is moved to the ear. As one kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when static, and can be used in applications for recognizing the posture of the camera apparatus (such as landscape/portrait switching, related games, and magnetometer posture calibration) and in vibration-recognition functions (such as a pedometer and tapping). Other sensors that may also be configured on the camera apparatus, such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, are not described here.
The audio circuit 760, a speaker 761, and a microphone 762 can provide an audio interface between the user and the camera apparatus. The audio circuit 760 can transmit the electrical signal converted from received audio data to the speaker 761, which converts it into a sound signal for output; on the other hand, the microphone 762 converts collected sound signals into electrical signals, which are received by the audio circuit 760 and converted into audio data. After the audio data is output to the processor 780 for processing, it is sent through the RF circuit 710 to, for example, another camera apparatus, or the audio data is output to the memory 720 for further processing.
The lens 770 in the camera apparatus can obtain an optical image, including an infrared light image and/or a visible light image. The camera apparatus may have one lens or at least two lenses (not shown in the figure), which can be adjusted according to actual design requirements.
The processor 780 is the control center of the camera apparatus. It connects all parts of the entire camera apparatus through various interfaces and lines, and executes various functions of the camera apparatus and processes data by running or executing the software programs and/or modules stored in the memory 720 and invoking data stored in the memory 720, thereby monitoring the camera apparatus as a whole. Optionally, the processor 780 may include one or more processing units. Preferably, the processor 780 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 780.
The camera apparatus further includes a power supply 790 (such as a battery) that supplies power to the components. Preferably, the power supply may be logically connected to the processor 780 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the camera apparatus may further include a camera, a Bluetooth module, and the like, which are not described here.
In this embodiment of the present invention, the processor 780 included in the camera apparatus further has the following functions:
obtaining a visible light image and an infrared light image; obtaining first luminance information and second luminance information, where the first luminance information is luminance information of the visible light image, and the second luminance information is luminance information of the infrared light image; fusing the first luminance information with the second luminance information to obtain a contrast-fused image; obtaining first texture information and second texture information, where the first texture information is texture information of the visible light image, and the second texture information is texture information of the infrared light image; fusing the first texture information and the second texture information with the contrast-fused image to obtain a texture-fused image; obtaining a color-fused image based on the visible light image and the infrared light image; and fusing the texture-fused image with the color-fused image to obtain a target image.
The terminal device provided in this application may be a mobile phone, a video camera, a monitor, a tablet computer, or the like. The terminal device may also include one or more lenses. The terminal device is similar to the camera apparatus shown in FIG. 7, and details are not described here again.
A person skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, apparatuses, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, apparatuses, and methods may be implemented in other manners. For example, the described apparatus embodiments are merely examples. For example, the division into units is merely a logical function division and may be another division in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the displayed or discussed mutual couplings or direct couplings or communication connections may be implemented through some interfaces, and the indirect couplings or communication connections between apparatuses or units may be in electrical, mechanical, or other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; that is, they may be located in one place or distributed on multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on such an understanding, the technical solutions of this application essentially, or the part contributing to the prior art, or all or part of the technical solutions, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for instructing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or some of the steps of the methods described in FIG. 3 to FIG. 5 of the embodiments of this application. The storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing embodiments are merely intended to describe the technical solutions of this application, not to limit them. Although this application is described in detail with reference to the foregoing embodiments, a person of ordinary skill in the art should understand that modifications may still be made to the technical solutions described in the foregoing embodiments, or equivalent replacements may be made to some technical features thereof; such modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of this application.

Claims (15)

  1. An image processing method, comprising:
    obtaining a visible light image and an infrared light image;
    obtaining first luminance information and second luminance information, wherein the first luminance information is luminance information of the visible light image, and the second luminance information is luminance information of the infrared light image;
    fusing the first luminance information with the second luminance information to obtain a contrast-fused image;
    obtaining first texture information and second texture information, wherein the first texture information is texture information of the visible light image, and the second texture information is texture information of the infrared light image;
    fusing the first texture information and the second texture information with the contrast-fused image to obtain a texture-fused image;
    obtaining a color-fused image of the visible light image and the infrared light image; and
    fusing the texture-fused image with the color-fused image to obtain a target image.
  2. The method according to claim 1, wherein the obtaining a color-fused image based on the visible light image and the infrared light image comprises:
    performing color perception restoration on the visible light image to obtain a color perception restoration image;
    performing color inference on the infrared light image according to a preset color correspondence to obtain a color inference image; and
    fusing the color perception restoration image with the color inference image to obtain the color-fused image.
  3. The method according to claim 1 or 2, wherein the fusing the first luminance information with the second luminance information to obtain a contrast-fused image comprises:
    calculating the first luminance information and the second luminance information by using a preset first formula to obtain a target luminance value; and
    obtaining the contrast-fused image from the target luminance value.
  4. The method according to any one of claims 1 to 3, wherein the fusing the first texture information and the second texture information with the contrast-fused image to obtain a texture-fused image comprises:
    calculating the first texture information and the second texture information by using a preset second formula to obtain a target texture pixel value; and
    superimposing the target texture pixel value onto the contrast-fused image to obtain the texture-fused image.
  5. The method according to claim 2, wherein the performing color inference on the infrared light image according to a preset color correspondence to obtain a color inference image comprises:
    determining ratios of color components for the infrared light image according to the preset color correspondence; and
    determining a target color from the ratios of the color components according to a preset calculation manner to obtain the color inference image.
  6. The method according to claim 2, wherein the performing color perception restoration on the visible light image to obtain a color perception restoration image comprises:
    inverting the luminance of the visible light image to obtain a luminance-inverted image;
    calculating the luminance-inverted image by using a defogging algorithm to obtain an enhanced image with enhanced luminance and color; and
    inverting the enhanced image to obtain the color perception restoration image.
  7. The method according to any one of claims 1 to 6, wherein the fusing the texture-fused image with the color-fused image to obtain a target image comprises:
    fusing luminance information in the texture-fused image with color components in the color-fused image to obtain the target image.
  8. An image processing apparatus, comprising:
    an image obtaining module, configured to obtain a visible light image and an infrared light image;
    a luminance information obtaining module, configured to obtain first luminance information and second luminance information, wherein the first luminance information is luminance information of the visible light image, and the second luminance information is luminance information of the infrared light image;
    a contrast fusion module, configured to fuse the first luminance information with the second luminance information to obtain a contrast-fused image;
    a texture information obtaining module, configured to obtain first texture information and second texture information, wherein the first texture information is texture information of the visible light image, and the second texture information is texture information of the infrared light image;
    a texture fusion module, configured to fuse the first texture information and the second texture information with the contrast-fused image to obtain a texture-fused image;
    a color fusion module, configured to obtain a color-fused image based on the visible light image and the infrared light image; and
    a target image synthesis module, configured to fuse the texture-fused image with the color-fused image to obtain a target image.
  9. The image processing apparatus according to claim 8, wherein the color fusion module comprises:
    a perception restoration submodule, configured to perform color perception restoration on the visible light image to obtain a color perception restoration image;
    a color inference submodule, configured to perform color inference on the infrared light image according to a preset color correspondence to obtain a color inference image; and
    a color fusion submodule, configured to fuse the color perception restoration image with the color inference image to obtain the color-fused image.
  10. The image processing apparatus according to claim 8 or 9, wherein the contrast fusion module is specifically configured to:
    calculate the first luminance information and the second luminance information by using a preset first formula to obtain a target luminance value; and
    obtain the contrast-fused image from the target luminance value.
  11. The image processing apparatus according to any one of claims 8 to 10, wherein the texture fusion module is specifically configured to:
    calculate the first texture information and the second texture information by using a preset second formula to obtain a target texture pixel value; and
    superimpose the target texture pixel value onto the contrast-fused image to obtain the texture-fused image.
  12. The image processing apparatus according to claim 9, wherein the color inference submodule is specifically configured to:
    determine ratios of color components for the infrared light image according to the preset color correspondence; and
    determine a target color from the ratios of the color components according to a preset calculation manner to obtain the color inference image.
  13. The image processing apparatus according to claim 9, wherein the perception restoration submodule is specifically configured to:
    invert the luminance of the visible light image to obtain a luminance-inverted image;
    calculate the luminance-inverted image by using a defogging algorithm to obtain an enhanced image with enhanced luminance and color; and
    invert the enhanced image to obtain the color perception restoration image.
  14. The image processing apparatus according to any one of claims 8 to 13, wherein the target image synthesis module is specifically configured to:
    fuse luminance information in the texture-fused image with color components in the color-fused image to obtain the target image.
  15. A camera apparatus, comprising:
    a lens, a processor, a memory, a bus, and an input/output interface, wherein
    the memory stores program code; and
    when invoking the program code in the memory, the processor performs the steps of the method according to any one of claims 1 to 7.
PCT/CN2018/123383 2018-02-09 2018-12-25 一种图像处理的方法以及相关设备 WO2019153920A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2020542815A JP6967160B2 (ja) 2018-02-09 2018-12-25 画像処理方法および関連デバイス
EP18905692.2A EP3734552A4 (en) 2018-02-09 2018-12-25 IMAGE PROCESSING PROCESS AND ASSOCIATED DEVICE
US16/943,497 US11250550B2 (en) 2018-02-09 2020-07-30 Image processing method and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810135739.1 2018-02-09
CN201810135739.1A CN110136183B (zh) 2018-02-09 2018-02-09 一种图像处理的方法、装置以及摄像装置

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US16/943,497 Continuation US11250550B2 (en) 2018-02-09 2020-07-30 Image processing method and related device

Publications (1)

Publication Number Publication Date
WO2019153920A1 true WO2019153920A1 (zh) 2019-08-15

Family

ID=67547901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/123383 WO2019153920A1 (zh) 2018-02-09 2018-12-25 一种图像处理的方法以及相关设备

Country Status (5)

Country Link
US (1) US11250550B2 (zh)
EP (1) EP3734552A4 (zh)
JP (1) JP6967160B2 (zh)
CN (1) CN110136183B (zh)
WO (1) WO2019153920A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021154807A1 (en) * 2020-01-28 2021-08-05 Gopro, Inc. Sensor prioritization for composite image capture
CN114830627A (zh) * 2020-11-09 2022-07-29 谷歌有限责任公司 红外光引导的肖像重照明

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108663677A (zh) * 2018-03-29 2018-10-16 上海智瞳通科技有限公司 一种多传感器深度融合提高目标检测能力的方法
US11410337B2 (en) * 2018-03-30 2022-08-09 Sony Corporation Image processing device, image processing method and mobile body
CN112767289B (zh) * 2019-10-21 2024-05-07 浙江宇视科技有限公司 图像融合方法、装置、介质及电子设备
CN112712485A (zh) * 2019-10-24 2021-04-27 杭州海康威视数字技术股份有限公司 一种图像融合方法及装置
CN112785510B (zh) * 2019-11-11 2024-03-05 华为技术有限公司 图像处理方法和相关产品
CN111161356B (zh) * 2019-12-17 2022-02-15 大连理工大学 一种基于双层优化的红外和可见光融合方法
CN113014747B (zh) * 2019-12-18 2023-04-28 中移物联网有限公司 屏下摄像头模组、图像处理方法及终端
CN113711584B (zh) * 2020-03-20 2023-03-03 华为技术有限公司 一种摄像装置
CN111539902B (zh) * 2020-04-16 2023-03-28 烟台艾睿光电科技有限公司 一种图像处理方法、系统、设备及计算机可读存储介质
CN113538303B (zh) * 2020-04-20 2023-05-26 杭州海康威视数字技术股份有限公司 图像融合方法
CN111526366B (zh) * 2020-04-28 2021-08-06 深圳市思坦科技有限公司 图像处理方法、装置、摄像设备和存储介质
WO2021217428A1 (zh) * 2020-04-28 2021-11-04 深圳市思坦科技有限公司 图像处理方法、装置、摄像设备和存储介质
CN113763295B (zh) * 2020-06-01 2023-08-25 杭州海康威视数字技术股份有限公司 图像融合方法、确定图像偏移量的方法及装置
CN111667446B (zh) * 2020-06-01 2023-09-01 上海富瀚微电子股份有限公司 图像处理方法
TWI767468B (zh) * 2020-09-04 2022-06-11 聚晶半導體股份有限公司 雙感測器攝像系統及其攝像方法
CN112258442A (zh) * 2020-11-12 2021-01-22 Oppo广东移动通信有限公司 图像融合方法、装置、计算机设备和存储介质
CN112396572B (zh) * 2020-11-18 2022-11-01 国网浙江省电力有限公司电力科学研究院 基于特征增强和高斯金字塔的复合绝缘子双光融合方法
CN112541903A (zh) * 2020-12-16 2021-03-23 深圳市欢太科技有限公司 页面比对方法、装置、电子设备及计算机存储介质
CN112767291A (zh) * 2021-01-04 2021-05-07 浙江大华技术股份有限公司 可见光图像和红外图像融合方法、设备及可读存储介质
CN112884688B (zh) * 2021-02-03 2024-03-29 浙江大华技术股份有限公司 一种图像融合方法、装置、设备及介质
CN112767298B (zh) * 2021-03-16 2023-06-13 杭州海康威视数字技术股份有限公司 一种可见光图像和红外图像的融合方法、装置
CN113344834B (zh) * 2021-06-02 2022-06-03 深圳兆日科技股份有限公司 图像拼接方法、装置及计算机可读存储介质
CN113421195B (zh) * 2021-06-08 2023-03-21 杭州海康威视数字技术股份有限公司 一种图像处理方法、装置及设备
CN113298177B (zh) * 2021-06-11 2023-04-28 华南理工大学 夜间图像着色方法、装置、介质和设备
CN115908518B (zh) * 2023-01-09 2023-05-09 四川赛狄信息技术股份公司 一种多传感图像融合方法及系统

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973990A (zh) * 2014-05-05 2014-08-06 浙江宇视科技有限公司 宽动态融合方法及装置
CN104732507A (zh) * 2015-04-02 2015-06-24 西安电子科技大学 基于纹理信息重构的不同光照两帧图像融合方法
CN105303598A (zh) * 2015-10-23 2016-02-03 浙江工业大学 基于纹理传输的多风格视频艺术化处理方法
CN106600572A (zh) * 2016-12-12 2017-04-26 长春理工大学 一种自适应的低照度可见光图像和红外图像融合方法
CN107346552A (zh) * 2017-06-23 2017-11-14 南京信息工程大学 基于图像灰度恢复形状技术的纹理力触觉再现方法

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006072401A (ja) * 2004-08-31 2006-03-16 Fujitsu Ltd 画像複合装置および画像複合方法
US7786898B2 (en) * 2006-05-31 2010-08-31 Mobileye Technologies Ltd. Fusion of far infrared and visible images in enhanced obstacle detection in automotive applications
US8520970B2 (en) * 2010-04-23 2013-08-27 Flir Systems Ab Infrared resolution and contrast enhancement with fusion
JP2011239259A (ja) * 2010-05-12 2011-11-24 Sony Corp 画像処理装置、画像処理方法及びプログラム
JP5669513B2 (ja) * 2010-10-13 2015-02-12 オリンパス株式会社 画像処理装置、画像処理プログラム、及び、画像処理方法
KR101858646B1 (ko) * 2012-12-14 2018-05-17 한화에어로스페이스 주식회사 영상 융합 장치 및 방법
CN103793896B (zh) * 2014-01-13 2017-01-18 哈尔滨工程大学 一种红外图像与可见光图像的实时融合方法
CN106576159B (zh) * 2015-06-23 2018-12-25 华为技术有限公司 一种获取深度信息的拍照设备和方法
CN104966108A (zh) * 2015-07-15 2015-10-07 武汉大学 一种基于梯度传递的可见光与红外图像融合方法
CN105069768B (zh) * 2015-08-05 2017-12-29 武汉高德红外股份有限公司 一种可见光图像与红外图像融合处理系统及融合方法
CN105513032A (zh) * 2015-11-27 2016-04-20 小红象医疗科技有限公司 一种红外医学影像与人体切片图像融合的方法
CN109478315B (zh) * 2016-07-21 2023-08-01 前视红外系统股份公司 融合图像优化系统和方法
CN106548467B (zh) * 2016-10-31 2019-05-14 广州飒特红外股份有限公司 红外图像和可见光图像融合的方法及装置
CN106875370B (zh) * 2017-01-24 2020-11-06 中国科学院空间应用工程与技术中心 一种全色图像和多光谱图像的融合方法及装置
CN107133558B (zh) * 2017-03-13 2020-10-20 北京航空航天大学 一种基于概率传播的红外行人显著性检测方法
US11055877B2 (en) * 2017-07-13 2021-07-06 Nec Corporation Image processing device, image processing method, and program storage medium
CN110363732A (zh) * 2018-04-11 2019-10-22 杭州海康威视数字技术股份有限公司 一种图像融合方法及其装置
CN110660088B (zh) * 2018-06-30 2023-08-22 华为技术有限公司 一种图像处理的方法和设备
WO2020051897A1 (zh) * 2018-09-14 2020-03-19 浙江宇视科技有限公司 图像融合方法、系统、电子设备和计算机可读介质
JP7262021B2 (ja) * 2018-09-18 2023-04-21 パナソニックIpマネジメント株式会社 奥行取得装置、奥行取得方法およびプログラム
CN110971889A (zh) * 2018-09-30 2020-04-07 华为技术有限公司 一种获取深度图像的方法、摄像装置以及终端

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973990A (zh) * 2014-05-05 2014-08-06 浙江宇视科技有限公司 宽动态融合方法及装置
CN104732507A (zh) * 2015-04-02 2015-06-24 西安电子科技大学 基于纹理信息重构的不同光照两帧图像融合方法
CN105303598A (zh) * 2015-10-23 2016-02-03 浙江工业大学 基于纹理传输的多风格视频艺术化处理方法
CN106600572A (zh) * 2016-12-12 2017-04-26 长春理工大学 一种自适应的低照度可见光图像和红外图像融合方法
CN107346552A (zh) * 2017-06-23 2017-11-14 南京信息工程大学 基于图像灰度恢复形状技术的纹理力触觉再现方法

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3734552A4

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021154807A1 (en) * 2020-01-28 2021-08-05 Gopro, Inc. Sensor prioritization for composite image capture
CN114830627A (zh) * 2020-11-09 2022-07-29 谷歌有限责任公司 红外光引导的肖像重照明

Also Published As

Publication number Publication date
CN110136183A (zh) 2019-08-16
CN110136183B (zh) 2021-05-18
JP2021513278A (ja) 2021-05-20
EP3734552A4 (en) 2021-03-24
US20200357104A1 (en) 2020-11-12
JP6967160B2 (ja) 2021-11-17
US11250550B2 (en) 2022-02-15
EP3734552A1 (en) 2020-11-04

Similar Documents

Publication Publication Date Title
WO2019153920A1 (zh) 一种图像处理的方法以及相关设备
TWI696146B (zh) 影像處理方法、裝置、電腦可讀儲存媒體和行動終端
US10827140B2 (en) Photographing method for terminal and terminal
JP6803982B2 (ja) 光学撮像方法および装置
US9536479B2 (en) Image display device and method
CN111418201A (zh) 一种拍摄方法及设备
WO2017071219A1 (zh) 检测皮肤区域的方法和检测皮肤区域的装置
WO2017088564A1 (zh) 一种图像处理方法及装置、终端、存储介质
CN108200347A (zh) 一种图像处理方法、终端和计算机可读存储介质
WO2021093712A1 (zh) 图像处理方法和相关产品
WO2021143281A1 (zh) 颜色阴影校正方法、终端设备及计算机可读存储介质
WO2023005870A1 (zh) 一种图像处理方法及相关设备
CN113507558A (zh) 去除图像眩光的方法、装置、终端设备和存储介质
WO2014098143A1 (ja) 画像処理装置、撮像装置、画像処理方法、画像処理プログラム
CN112243117B (zh) 图像处理装置、方法及摄像机
WO2018119787A1 (zh) 一种去马赛克方法及装置
CN115516494A (zh) 用于生成图像的方法及其电子装置
WO2024078275A1 (zh) 一种图像处理的方法、装置、电子设备及存储介质
TW202304189A (zh) 執行場景相關鏡頭陰影校正的方法及系統

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18905692

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018905692

Country of ref document: EP

Effective date: 20200729

Ref document number: 2020542815

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE