WO2017080236A1 - Image processing method and apparatus - Google Patents

Image processing method and apparatus

Info

Publication number
WO2017080236A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
grids
mesh
grid
value
Prior art date
Application number
PCT/CN2016/089024
Other languages
English (en)
French (fr)
Inventor
王文平
李礼
Original Assignee
乐视控股(北京)有限公司
乐视移动智能信息技术(北京)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 乐视控股(北京)有限公司 and 乐视移动智能信息技术(北京)有限公司
Publication of WO2017080236A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N25/62Detection or reduction of noise due to excess charges produced by the exposure, e.g. smear, blooming, ghost image, crosstalk or leakage between pixels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/73Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to an image processing method and apparatus.
  • the Qualcomm algorithm divides the denoising parameters into six groups, that is, six different regions, according to different gain values, and these six regions cover all scenes from the brightest to the darkest by interpolating between one another. For a picture, the algorithm first determines its exposure index, i.e. the digital camera's sensitivity (ISO) value, and then denoises according to the denoising parameters of the corresponding region. Because the Qualcomm algorithm judges only the picture as a whole and then denoises it with a single, unified set of denoising parameters, its denoising effect is poor for the parts of the image where the difference between light and dark is large.
  • ISO: the sensitivity value of a digital camera, named after the International Organization for Standardization.
  • the embodiments of the present application provide an image processing method and device, which are used to solve the problem that, when an image is denoised by the Qualcomm algorithm, the denoising effect is poor for the portions of the image with a large difference between light and dark.
  • an image processing method including:
  • an image processing apparatus including:
  • a first obtaining module configured to acquire the sensitivity (ISO) value of each grid in the area covered by each object;
  • a calculation module configured to calculate the average of the grid ISO values within the coverage area of each object;
  • a second acquiring module configured to count the number of first grids in the coverage area of each object; wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value;
  • a third obtaining module configured to acquire, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids;
  • a denoising module configured to denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and to denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain a target image.
  • an embodiment of the present application provides an image processing apparatus, including a memory, one or more processors, and one or more programs, wherein the one or more programs, when executed by the one or more processors, perform the following operations: identifying the objects contained in the image to be processed and dividing the image into grids; acquiring the sensitivity (ISO) value of each grid in the area covered by each object; calculating the average of the grid ISO values within the coverage area of each object; counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value; if the number exceeds a preset number of grids, acquiring the exposure index corresponding to each of the first grids; and denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
  • embodiments of the present application provide a computer readable storage medium having stored thereon computer executable instructions that, in response to execution, cause an image processing apparatus to perform operations,
  • the operations include: identifying the objects contained in the image to be processed and dividing the image into grids; acquiring the sensitivity (ISO) value of each grid in the area covered by each object; calculating the average of the grid ISO values within the coverage area of each object; counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value; if the number exceeds a preset number of grids, acquiring the exposure index corresponding to each of the first grids; and denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
  • the image processing method and device of the embodiments of the present application identify the objects contained in the image to be processed and divide the image into grids, acquire the ISO value of each grid in the area covered by each object, calculate the average of the grid ISO values within each object's coverage area, and count the number of first grids in each object's coverage area; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised with its own exposure index and the image portions corresponding to the other grids are denoised with the exposure index corresponding to the average value, to obtain a target image.
  • the embodiments of the present application avoid the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieve a better denoising effect through local dynamic denoising.
  • FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic flowchart of an image processing method according to Embodiment 2 of the present application.
  • FIG. 3 is a schematic flowchart of an image processing method according to Embodiment 3 of the present application.
  • FIG. 4 is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present application.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 5 of the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a computer program product for image processing according to Embodiment 6 of the present application.
  • FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present application; the image processing method includes:
  • Step 101: Identify the objects contained in the image to be processed and divide the image into grids.
  • Object recognition can be performed on the image before the image to be processed is denoised. Specifically, a classifier is first trained with a pattern recognition method on a large number of object images that have relatively distinct Haar features; based on this classifier, the different objects in the image to be processed can be identified through their Haar features. After the different objects in the image to be processed have been identified, the image can be divided into a 64×48 grid.
  • Step 102: Acquire the sensitivity (ISO) value of each grid in the area covered by each object.
  • Step 103: Calculate the average of the grid ISO values within the coverage area of each object.
  • Step 104: Count the number of first grids in the coverage area of each object.
  • the first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value.
  • after the average of the grid ISO values within each object's coverage area has been calculated, the ISO value of each grid in the coverage area is compared with the average; if the difference for a grid exceeds the preset value, that grid is designated a first grid.
  • the first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value. Further, the number of first grids within each object is counted.
  • Step 105: If the number exceeds a preset number of grids, acquire the exposure index corresponding to each of the first grids.
  • the number of first grids counted within each object's coverage area is compared with the preset number of grids; if the count exceeds the preset number, the image portions corresponding to these first grids differ greatly in brightness from the other grids, so the exposure index corresponding to each first grid needs to be obtained.
  • in this embodiment, the corresponding exposure index is selected according to the ISO value of each first grid, and the corresponding image portion is then processed separately based on the exposure index of that first grid.
  • Step 106: Denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
  • to achieve a better denoising effect for the image to be processed, the present embodiment denoises the first grids and the other grids separately to obtain the target image. Specifically, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids are denoised using the exposure index corresponding to the calculated average of the grid ISO values, to obtain the target image.
  • the image processing method provided in this embodiment identifies the objects contained in the image to be processed and divides the image into grids, acquires the sensitivity (ISO) value of each grid in the area covered by each object, calculates the average of the grid ISO values within each object's coverage area, and counts the number of first grids in each object's coverage area; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image.
  • the embodiment of the present application avoids the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieves a better denoising effect through local dynamic denoising.
  • FIG. 2 is a schematic flowchart of an image processing method according to Embodiment 2 of the present application; the image processing method includes:
  • Step 201: Identify the objects contained in the image to be processed and divide the image into grids.
  • Step 202: Acquire the sensitivity (ISO) value of each grid in the area covered by each object.
  • Step 203: Calculate the average of the grid ISO values within the coverage area of each object.
  • Step 204: Count the number of first grids in the coverage area of each object.
  • Step 205: If the number does not exceed the preset number of grids, acquire the exposure index corresponding to the average value of each object.
  • Step 206: Denoise the corresponding image portion of each object using the exposure index of that object, to obtain a target image.
  • if the number of first grids does not exceed the preset number of grids, the image portions corresponding to the first grids in the image to be processed are small; to improve the efficiency of the denoising process, the exposure index corresponding to each object's average ISO value may be used to denoise the corresponding image portions, to obtain the target image.
  • the image processing method provided in this embodiment identifies the objects contained in the image to be processed and divides the image into grids, acquires the sensitivity (ISO) value of each grid in the area covered by each object, calculates the average of the grid ISO values within each object's coverage area, and counts the number of first grids in each object's coverage area; when that number does not exceed the preset number of grids, the exposure index corresponding to each object's average value is acquired and the corresponding image portions are denoised using the exposure index of each object, to obtain a target image.
  • when the difference between the light and dark parts of the image to be processed is small, the image may be denoised using the exposure index corresponding to the obtained average ISO value, which not only yields a good denoising effect but also gives a faster processing speed.
  • FIG. 3 is a schematic flowchart of an image processing method according to Embodiment 3 of the present application; the image processing method includes:
  • Step 301: Divide the image to be processed into grids.
  • Step 302: Acquire the sensitivity (ISO) value of each grid.
  • Step 303: Select the corresponding exposure index according to the ISO value of each grid.
  • Step 304: Denoise the image using the exposure index corresponding to each grid, to obtain the target image.
  • in this embodiment, the entire image to be processed is divided into a 64×48 grid of cells, and the ISO value of each grid is then calculated; after the ISO value of each grid is obtained, the corresponding exposure index is selected according to that ISO value. After the exposure index of each grid is obtained, the image portion corresponding to each grid is denoised using that exposure index, to obtain the target image.
  • in this embodiment, the brightness average of the entire image to be processed is no longer calculated, and each grid is denoised separately according to its own exposure index; this avoids the denoising of bright areas in the picture being too strong and causing blur, and the denoising of dark areas being too weak and leaving heavy noise, so the denoising effect is more refined.
  • FIG. 4 is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present application.
  • the apparatus includes: an identification and division module 11, a first acquisition module 12, a calculation module 13, a second acquisition module 14, a third acquisition module 15, and a denoising module 16.
  • the identification and division module 11 is configured to identify the objects contained in the image to be processed and divide the image into grids.
  • the first acquisition module 12 is configured to acquire the sensitivity (ISO) value of each grid in the area covered by each object.
  • the calculation module 13 is configured to calculate the average of the grid ISO values within the coverage area of each object.
  • the second acquisition module 14 is configured to count the number of first grids in the coverage area of each object.
  • the first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value.
  • the third acquisition module 15 is configured to acquire, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids.
  • the denoising module 16 is configured to denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and to denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain a target image.
  • the third acquisition module 15 is further configured to acquire, if the number does not exceed the preset number of grids, the exposure index corresponding to the average value of each object.
  • the denoising module 16 is further configured to denoise the corresponding image portion of each object using the exposure index of that object, to obtain the target image.
  • the first acquisition module 12 is further configured to, after the sensitivity (ISO) value of each grid in the area covered by each object is acquired, select the corresponding exposure index according to the ISO value of each grid.
  • the denoising module 16 is further configured to denoise the image using the exposure index corresponding to each grid, to obtain the target image.
  • the identification and division module 11 is specifically configured to process the image to be processed using Haar features to obtain the objects contained in the image, and to divide the image to be processed into a 64×48 grid.
  • the function modules of the image processing apparatus provided in this embodiment can be used to execute the flows of the image processing methods shown in FIG. 1, FIG. 2 and FIG. 3; their specific working principles are not described again here, and reference is made to the description of the method embodiments for details.
  • the image processing apparatus collects data of a target to be photographed through an image sensor, identifies each scene contained in the target from the collected data, obtains the contour information corresponding to each scene, divides the target into regions according to the contour information of each scene, adjusts the distance between the image sensor and the target to obtain the imaging position with the best sharpness for each region, extracts the region image at the imaging position corresponding to each region, and merges the region images corresponding to the regions to obtain the target image of the target. In this embodiment, the target is divided into regions by the contours of the scenes, and the clearest region image of each region is extracted and combined into the final image of the target, making the final image clearer and sharper, so that the scenes inside the target can be presented clearly.
  • FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 5 of the present application.
  • the image processing apparatus of the embodiment of the present application includes a memory 61, one or more processors 62, and one or more programs 63.
  • the one or more programs 63, when executed by the one or more processors 62, perform the method of any of the above embodiments.
  • the image processing apparatus of the embodiment of the present application identifies the objects contained in the image to be processed and divides the image into grids, acquires the sensitivity (ISO) value of each grid in the area covered by each object, calculates the average of the grid ISO values within each object's coverage area, and counts the number of first grids in each object's coverage area; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image.
  • the embodiment of the present application avoids the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieves a better denoising effect through local dynamic denoising.
  • FIG. 6 is a schematic structural diagram of an embodiment of a computer program product for image processing according to Embodiment 6 of the present application.
  • the computer program product 71 for image processing of the embodiment of the present application may include a signal bearing medium 72.
  • Signal bearing medium 72 may include one or more instructions 73 that, when executed by, for example, a processor, may provide the functionality described above with respect to Figures 1-4.
  • the instructions 73 may include: one or more instructions for identifying the objects contained in the image to be processed and dividing the image into grids; one or more instructions for acquiring the sensitivity (ISO) value of each grid in the area covered by each object; one or more instructions for calculating the average of the grid ISO values within the coverage area of each object; one or more instructions for counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value; one or more instructions for acquiring, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids; and one or more instructions for denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
  • signal bearing medium 72 can include computer readable media 74 such as, but not limited to, a hard disk drive, a compact disk (CD), a digital versatile disk (DVD), a digital tape, a memory, and the like.
  • the signal bearing medium 72 can include a recordable medium 75 such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and the like.
  • the signal bearing medium 72 can include a communication medium 76 such as, but not limited to, a digital and/or analog communication medium (eg, fiber optic cable, waveguide, wired communication link, wireless communication link, etc.).
  • the computer program product 71 can be conveyed to one or more modules of the recognition apparatus for a multi-finger swipe gesture by an RF signal bearing medium 72, where the signal bearing medium 72 is conveyed by a wireless communication medium (e.g., a wireless communication medium complying with the IEEE 802.11 standard).
  • the computer program product of the embodiment of the present application identifies the objects contained in the image to be processed and divides the image into grids, acquires the sensitivity (ISO) value of each grid in the area covered by each object, calculates the average of the grid ISO values within each object's coverage area, and counts the number of first grids in each object's coverage area; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image.
  • the embodiment of the present application avoids the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieves a better denoising effect through local dynamic denoising.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application provide an image processing method and apparatus. The objects contained in an image to be processed are identified and the image is divided into grids; the sensitivity (ISO) value of each grid in the area covered by each object is acquired; the average of the grid ISO values within each object's coverage area is calculated; and the number of first grids in each object's coverage area is counted. When that number exceeds a preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image. Through local dynamic denoising, the embodiments of the present application achieve a better denoising effect.

Description

Image processing method and apparatus
This patent application claims priority to the Chinese patent application filed on November 15, 2015 with application number 2015107827289, which is hereby incorporated by reference in its entirety.
Technical Field
The embodiments of the present application relate to the field of electronic technologies, and in particular to an image processing method and apparatus.
Background
Smartphones have gradually become part of people's daily lives; they are not only everyday communication devices but also easy-to-carry entertainment devices. The cameras in smartphones are configured with ever higher specifications, such as large apertures, high pixel counts and optical image stabilization, and because of the portability of smartphones, users increasingly like to take photos with the camera on their phone. Although the cameras on smartphones are continuously being optimized, noise in the captured images is still unavoidable.
At present, mobile phones based on the Android system can denoise images using the Qualcomm algorithm. Specifically, each image to be captured is divided into a 64×48 grid, and the brightness average of all these grid cells is calculated to determine the exposure index; with bright-area compensation, small regions whose brightness exceeds the average by a certain amount are ignored when the exposure index is determined. The Qualcomm algorithm divides the denoising parameters into six groups, i.e. six different regions, according to different gain values, and these six regions cover all scenes from the brightest to the darkest by interpolating between one another. For a picture, the algorithm first determines its digital-camera sensitivity rating (ISO, named after the International Organization for Standardization), i.e. the exposure index, and then denoises according to the denoising parameters of the corresponding region. The Qualcomm algorithm judges only the picture as a whole and then denoises it with a single, unified set of denoising parameters, so its denoising effect is poor for the parts of the image where the difference between light and dark is large.
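To make this global behaviour concrete, the sketch below (Python/NumPy) mirrors the prior-art flow just described: a per-cell brightness average with bright-area compensation, and one set of denoising parameters interpolated from six gain-indexed groups. The bright-area margin, the gain breakpoints and the parameter table are illustrative assumptions, not values taken from the Qualcomm implementation.
```python
import numpy as np

# Sketch of the global (whole-picture) approach; margin, breakpoints and
# parameter groups are hypothetical placeholders.
def compensated_mean_luma(grid_lumas, bright_margin=30):
    """Mean brightness over the grid cells, ignoring small overly bright cells."""
    lumas = np.asarray(grid_lumas, dtype=float)
    mean = lumas.mean()
    kept = lumas[lumas <= mean + bright_margin]   # bright-area compensation
    return float(kept.mean())

def pick_denoise_params(iso, gain_breakpoints, param_groups):
    """Interpolate between the two parameter groups whose gains bracket `iso`."""
    idx = int(np.searchsorted(gain_breakpoints, iso))
    lo, hi = max(idx - 1, 0), min(idx, len(param_groups) - 1)
    if lo == hi:
        return dict(param_groups[lo])
    t = (iso - gain_breakpoints[lo]) / (gain_breakpoints[hi] - gain_breakpoints[lo])
    return {k: (1 - t) * param_groups[lo][k] + t * param_groups[hi][k]
            for k in param_groups[lo]}

# Example: six groups keyed by gain, yielding one parameter set for the whole image.
breakpoints = [100, 200, 400, 800, 1600, 3200]
groups = [{"strength": s} for s in (1, 2, 4, 8, 12, 16)]
params = pick_denoise_params(640, breakpoints, groups)   # -> {"strength": 6.4}
```
The point of the sketch is the limitation the application targets: the whole frame ends up with a single interpolated parameter set, regardless of local light/dark differences.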
Summary
The embodiments of the present application provide an image processing method and apparatus, which are used to solve the problem that, when an image is denoised by the Qualcomm algorithm, the denoising effect is poor for the portions of the image with a large difference between light and dark.
To achieve the above object, an embodiment of the present application provides an image processing method, including:
identifying the objects contained in the image to be processed and dividing the image into grids;
acquiring the sensitivity (ISO) value of each grid in the area covered by each object;
calculating the average of the grid ISO values within the coverage area of each object;
counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value;
if the number exceeds a preset number of grids, acquiring the exposure index corresponding to each of the first grids; and
denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
To achieve the above object, an embodiment of the present application provides an image processing apparatus, including:
an identification and division module, configured to identify the objects contained in the image to be processed and divide the image into grids;
a first acquisition module, configured to acquire the sensitivity (ISO) value of each grid in the area covered by each object;
a calculation module, configured to calculate the average of the grid ISO values within the coverage area of each object;
a second acquisition module, configured to count the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value;
a third acquisition module, configured to acquire, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids; and
a denoising module, configured to denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and to denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain a target image.
In another aspect, an embodiment of the present application provides an image processing apparatus, including a memory, one or more processors, and one or more programs, wherein the one or more programs, when executed by the one or more processors, perform the following operations: identifying the objects contained in the image to be processed and dividing the image into grids; acquiring the sensitivity (ISO) value of each grid in the area covered by each object; calculating the average of the grid ISO values within the coverage area of each object; counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value; if the number exceeds a preset number of grids, acquiring the exposure index corresponding to each of the first grids; and denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
In another aspect, an embodiment of the present application provides a computer-readable storage medium having stored thereon computer-executable instructions that, in response to execution, cause an image processing apparatus to perform operations, the operations including: identifying the objects contained in the image to be processed and dividing the image into grids; acquiring the sensitivity (ISO) value of each grid in the area covered by each object; calculating the average of the grid ISO values within the coverage area of each object; counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value; if the number exceeds a preset number of grids, acquiring the exposure index corresponding to each of the first grids; and denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
With the image processing method and apparatus of the embodiments of the present application, the objects contained in the image to be processed are identified and the image is divided into grids, the sensitivity (ISO) value of each grid in the area covered by each object is acquired, the average of the grid ISO values within each object's coverage area is calculated, and the number of first grids in each object's coverage area is counted; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image. The embodiments of the present application avoid the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieve a better denoising effect through local dynamic denoising.
Brief Description of the Drawings
FIG. 1 is a schematic flowchart of an image processing method according to Embodiment 1 of the present application;
FIG. 2 is a schematic flowchart of an image processing method according to Embodiment 2 of the present application;
FIG. 3 is a schematic flowchart of an image processing method according to Embodiment 3 of the present application;
FIG. 4 is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present application;
FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 5 of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of a computer program product for image processing according to Embodiment 6 of the present application.
Detailed Description
The image processing method and apparatus provided by the embodiments of the present application are described in detail below with reference to the accompanying drawings.
Embodiment 1
As shown in FIG. 1, which is a schematic flowchart of an image processing method according to Embodiment 1 of the present application, the image processing method includes:
Step 101: Identify the objects contained in the image to be processed and divide the image into grids.
Object recognition can be performed on the image before the image to be processed is denoised. Specifically, a classifier is first trained with a pattern recognition method on a large number of object images that have relatively distinct Haar features; based on this classifier, the different objects in the image to be processed can be identified through their Haar features. After the different objects in the image to be processed have been identified, the image can be divided into a 64×48 grid.
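As an illustration of step 101, the sketch below uses OpenCV's Haar-cascade detector and NumPy to split an image into a 64×48 grid of cells. The cascade file path, the detector parameters and the cell-as-tuple representation are assumptions made for the example, not details fixed by the patent.
```python
import cv2
import numpy as np

GRID_COLS, GRID_ROWS = 64, 48  # grid size named in the text

def detect_objects(gray, cascade_path):
    # Detect object regions with a classifier trained on images that have
    # distinct Haar features (any pre-trained cascade file will do here).
    cascade = cv2.CascadeClassifier(cascade_path)
    return cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)

def split_into_grids(image):
    # Divide the whole image into 64x48 cells; each (x0, y0, x1, y1) tuple
    # is one "grid" in the sense used by the method.
    h, w = image.shape[:2]
    ys = np.linspace(0, h, GRID_ROWS + 1, dtype=int)
    xs = np.linspace(0, w, GRID_COLS + 1, dtype=int)
    return [(int(xs[j]), int(ys[i]), int(xs[j + 1]), int(ys[i + 1]))
            for i in range(GRID_ROWS) for j in range(GRID_COLS)]

def grids_covered_by(rect, grids):
    # Keep the grid cells whose centres fall inside an object's bounding box.
    x, y, w, h = rect
    return [(x0, y0, x1, y1) for (x0, y0, x1, y1) in grids
            if x <= (x0 + x1) / 2 < x + w and y <= (y0 + y1) / 2 < y + h]
```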
Step 102: Acquire the sensitivity (ISO) value of each grid in the area covered by each object.
Step 103: Calculate the average of the grid ISO values within the coverage area of each object.
Step 104: Count the number of first grids in the coverage area of each object.
Here, a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value.
After the average of the grid ISO values within each object's coverage area has been calculated, the ISO value of each grid in the coverage area is compared with the average; if the difference for a grid exceeds the preset value, that grid is designated a first grid. In other words, a first grid is any grid in an object's coverage area whose ISO value differs from the average by more than the preset value. Further, the number of first grids within each object is counted.
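A minimal sketch of steps 103 and 104 follows, assuming an iso_of_grid() helper that returns the measured sensitivity of one cell; both that helper and the preset deviation threshold are hypothetical placeholders rather than values from the application.
```python
import numpy as np

PRESET_DEVIATION = 100  # hypothetical "preset value" for the ISO difference

def find_first_grids(object_grids, iso_of_grid):
    # Step 103: average ISO over the grids covered by one object.
    iso_values = np.array([iso_of_grid(g) for g in object_grids], dtype=float)
    average_iso = float(iso_values.mean())
    # Step 104: grids whose ISO deviates from the average by more than the
    # preset value are the "first grids".
    deviates = np.abs(iso_values - average_iso) > PRESET_DEVIATION
    first_grids = [g for g, d in zip(object_grids, deviates) if d]
    return average_iso, first_grids
```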
Step 105: If the number exceeds a preset number of grids, acquire the exposure index corresponding to each of the first grids.
The number of first grids counted within each object's coverage area is compared with the preset number of grids. If the count exceeds the preset number, the image portions corresponding to these first grids differ greatly in brightness from the other grids, and the exposure index corresponding to each first grid needs to be obtained. In this embodiment, the corresponding exposure index is selected according to the ISO value of each first grid, and the corresponding image portion is then exposed separately based on the exposure index of that first grid.
Step 106: Denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
To achieve a better denoising effect for the image to be processed, this embodiment denoises the first grids and the other grids separately to obtain the target image. Specifically, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids are denoised using the exposure index corresponding to the calculated average of the grid ISO values, to obtain the target image.
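The dispatch below sketches steps 105 and 106 for one object, and also anticipates the fallback branch described in Embodiment 2 (when the count does not exceed the threshold). exposure_index_for_iso() and denoise_region() stand in for the camera pipeline's lookup table and denoising filter; they, the grid-to-ISO mapping and the preset count are assumptions made for the example.
```python
PRESET_GRID_COUNT = 50  # hypothetical "preset number of grids"

def denoise_object_region(image, grid_iso, average_iso, first_grids,
                          exposure_index_for_iso, denoise_region,
                          preset_grid_count=PRESET_GRID_COUNT):
    # grid_iso maps every grid cell covered by this object to its ISO value.
    if len(first_grids) > preset_grid_count:
        # Step 106: each first grid is denoised with its own exposure index...
        for grid in first_grids:
            denoise_region(image, grid, exposure_index_for_iso(grid_iso[grid]))
        # ...and the remaining grids with the index of the average ISO value.
        avg_ei = exposure_index_for_iso(average_iso)
        for grid in set(grid_iso) - set(first_grids):
            denoise_region(image, grid, avg_ei)
    else:
        # Embodiment 2: few outliers, so one exposure index for the whole object.
        avg_ei = exposure_index_for_iso(average_iso)
        for grid in grid_iso:
            denoise_region(image, grid, avg_ei)
    return image
```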
With the image processing method provided in this embodiment, the objects contained in the image to be processed are identified and the image is divided into grids, the sensitivity (ISO) value of each grid in the area covered by each object is acquired, the average of the grid ISO values within each object's coverage area is calculated, and the number of first grids in each object's coverage area is counted; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image. The embodiment of the present application avoids the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieves a better denoising effect through local dynamic denoising.
Embodiment 2
As shown in FIG. 2, which is a schematic flowchart of an image processing method according to Embodiment 2 of the present application, the image processing method includes:
Step 201: Identify the objects contained in the image to be processed and divide the image into grids.
Step 202: Acquire the sensitivity (ISO) value of each grid in the area covered by each object.
Step 203: Calculate the average of the grid ISO values within the coverage area of each object.
Step 204: Count the number of first grids in the coverage area of each object.
For the details of steps 201 to 204, reference may be made to the description of steps 101 to 104 in Embodiment 1 above; they are not repeated here.
Step 205: If the number does not exceed the preset number of grids, acquire the exposure index corresponding to the average value of each object.
Step 206: Denoise the corresponding image portion of each object using the exposure index of that object, to obtain a target image.
Further, when the number of first grids does not exceed the preset number of grids, the image portions corresponding to the first grids in the image to be processed are small; to improve the efficiency of the denoising process, the exposure index corresponding to each object's average ISO value may be used to denoise the corresponding image portions, to obtain the target image.
With the image processing method provided in this embodiment, the objects contained in the image to be processed are identified and the image is divided into grids, the sensitivity (ISO) value of each grid in the area covered by each object is acquired, the average of the grid ISO values within each object's coverage area is calculated, and the number of first grids in each object's coverage area is counted; when that number does not exceed the preset number of grids, the exposure index corresponding to each object's average value is acquired and the corresponding image portions are denoised using the exposure index of each object, to obtain a target image. In this embodiment, when the difference between the light and dark parts of the image to be processed is small, the image may be denoised using the exposure index corresponding to the obtained average ISO value, which not only yields a good denoising effect but also gives a faster processing speed.
Embodiment 3
As shown in FIG. 3, which is a schematic flowchart of an image processing method according to Embodiment 3 of the present application, the image processing method includes:
Step 301: Divide the image to be processed into grids.
Step 302: Acquire the sensitivity (ISO) value of each grid.
Step 303: Select the corresponding exposure index according to the ISO value of each grid.
Step 304: Denoise the image using the exposure index corresponding to each grid, to obtain the target image.
In this embodiment, the entire image to be processed is divided into a 64×48 grid of cells, and the ISO value of each grid is then calculated; after the ISO value of each grid is obtained, the corresponding exposure index is selected according to that ISO value. After the exposure index of each grid is obtained, the image portion corresponding to each grid is denoised using that exposure index, to obtain the target image. In this embodiment, the brightness average of the entire image to be processed is no longer calculated, and each grid is denoised separately according to its own exposure index; this avoids the denoising of bright areas in the picture being too strong and causing blur, and the denoising of dark areas being too weak and leaving heavy noise, so the denoising effect is more refined.
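Reusing the hypothetical helpers from the earlier sketches, Embodiment 3 reduces to a single loop with no per-object averaging: every cell is denoised with the exposure index chosen from its own ISO value.
```python
def denoise_per_grid(image, iso_of_grid, exposure_index_for_iso, denoise_region):
    for grid in split_into_grids(image):                # step 301
        iso = iso_of_grid(grid)                         # step 302
        exposure_index = exposure_index_for_iso(iso)    # step 303
        denoise_region(image, grid, exposure_index)     # step 304
    return image
```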
Embodiment 4
As shown in FIG. 4, which is a schematic structural diagram of an image processing apparatus according to Embodiment 4 of the present application, the apparatus includes: an identification and division module 11, a first acquisition module 12, a calculation module 13, a second acquisition module 14, a third acquisition module 15, and a denoising module 16.
Specifically, the identification and division module 11 is configured to identify the objects contained in the image to be processed and divide the image into grids.
The first acquisition module 12 is configured to acquire the sensitivity (ISO) value of each grid in the area covered by each object.
The calculation module 13 is configured to calculate the average of the grid ISO values within the coverage area of each object.
The second acquisition module 14 is configured to count the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value.
The third acquisition module 15 is configured to acquire, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids.
The denoising module 16 is configured to denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and to denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain a target image.
Further, the third acquisition module 15 is also configured to acquire, if the number does not exceed the preset number of grids, the exposure index corresponding to the average value of each object.
The denoising module 16 is also configured to denoise the corresponding image portion of each object using the exposure index of that object, to obtain the target image.
Further, the first acquisition module 12 is also configured to, after the sensitivity (ISO) value of each grid in the area covered by each object is acquired, select the corresponding exposure index according to the ISO value of each grid.
The denoising module 16 is also configured to denoise the image using the exposure index corresponding to each grid, to obtain the target image.
Further, the identification and division module 11 is specifically configured to process the image to be processed using Haar features to obtain the objects contained in the image to be processed, and to divide the image to be processed into a 64×48 grid. The function modules of the image processing apparatus provided in this embodiment can be used to execute the flows of the image processing methods shown in FIG. 1, FIG. 2 and FIG. 3; their specific working principles are not described again here, and reference is made to the description of the method embodiments for details.
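Purely as a structural illustration, and not as an implementation of the patented apparatus, the class below wires the hypothetical helpers from the previous sketches together in the order the modules are described: identification and division, ISO acquisition, averaging and counting, exposure-index selection, and denoising.
```python
class ImageProcessingDevice:
    """Sketch of the module layout in Embodiment 4, built on the helpers above."""

    def __init__(self, cascade_path, iso_of_grid, exposure_index_for_iso,
                 denoise_region):
        self.cascade_path = cascade_path                       # module 11
        self.iso_of_grid = iso_of_grid                         # module 12
        self.exposure_index_for_iso = exposure_index_for_iso   # module 15
        self.denoise_region = denoise_region                   # module 16

    def process(self, image, gray):
        grids = split_into_grids(image)                        # module 11
        for rect in detect_objects(gray, self.cascade_path):
            covered = grids_covered_by(rect, grids)            # module 11
            if not covered:
                continue
            grid_iso = {g: self.iso_of_grid(g) for g in covered}      # module 12
            avg_iso, first = find_first_grids(covered, grid_iso.get)  # modules 13-14
            denoise_object_region(image, grid_iso, avg_iso, first,    # modules 15-16
                                  self.exposure_index_for_iso,
                                  self.denoise_region)
        return image
```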
The image processing apparatus provided in this embodiment collects data of a target to be photographed through an image sensor, identifies each scene contained in the target from the collected data, obtains the contour information corresponding to each scene, divides the target into regions according to the contour information of each scene, adjusts the distance between the image sensor and the target to obtain the imaging position with the best sharpness for each region, extracts the region image at the imaging position corresponding to each region, and merges the region images corresponding to the regions to obtain the target image of the target. In this embodiment, the target is divided into regions by the contours of the scenes, and the clearest region image of each region is extracted and combined into the final image of the target, making the final image clearer and sharper, so that the scenes inside the target can be presented clearly.
Embodiment 5
FIG. 5 is a schematic structural diagram of an image processing apparatus according to Embodiment 5 of the present application. As shown in FIG. 5, the image processing apparatus of the embodiment of the present application includes: a memory 61, one or more processors 62, and one or more programs 63.
The one or more programs 63, when executed by the one or more processors 62, perform the method of any of the above embodiments.
With the image processing apparatus of the embodiment of the present application, the objects contained in the image to be processed are identified and the image is divided into grids, the sensitivity (ISO) value of each grid in the area covered by each object is acquired, the average of the grid ISO values within each object's coverage area is calculated, and the number of first grids in each object's coverage area is counted; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image. The embodiment of the present application avoids the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieves a better denoising effect through local dynamic denoising.
Embodiment 6
FIG. 6 is a schematic structural diagram of an embodiment of a computer program product for image processing according to Embodiment 6 of the present application. As shown in FIG. 6, the computer program product 71 for image processing of the embodiment of the present application may include a signal bearing medium 72. The signal bearing medium 72 may include one or more instructions 73 that, when executed by, for example, a processor, may provide the functionality described above with respect to FIG. 1 to FIG. 4. For example, the instructions 73 may include: one or more instructions for identifying the objects contained in the image to be processed and dividing the image into grids; one or more instructions for acquiring the sensitivity (ISO) value of each grid in the area covered by each object; one or more instructions for calculating the average of the grid ISO values within the coverage area of each object; one or more instructions for counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value; one or more instructions for acquiring, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids; and one or more instructions for denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image. Therefore, for example, referring to FIG. 4, the image processing apparatus may perform one or more of the steps shown in FIG. 1 in response to the instructions 73.
In some implementations, the signal bearing medium 72 may include a computer-readable medium 74, such as, but not limited to, a hard disk drive, a compact disc (CD), a digital versatile disc (DVD), a digital tape, a memory, and the like. In some implementations, the signal bearing medium 72 may include a recordable medium 75, such as, but not limited to, a memory, a read/write (R/W) CD, an R/W DVD, and the like. In some implementations, the signal bearing medium 72 may include a communication medium 76, such as, but not limited to, a digital and/or analog communication medium (for example, a fiber-optic cable, a waveguide, a wired communication link, a wireless communication link, and the like). Therefore, for example, the computer program product 71 may be conveyed to one or more modules of the recognition apparatus for a multi-finger swipe gesture by an RF signal bearing medium 72, where the signal bearing medium 72 is conveyed by a wireless communication medium (for example, a wireless communication medium complying with the IEEE 802.11 standard).
With the computer program product of the embodiment of the present application, the objects contained in the image to be processed are identified and the image is divided into grids, the sensitivity (ISO) value of each grid in the area covered by each object is acquired, the average of the grid ISO values within each object's coverage area is calculated, and the number of first grids in each object's coverage area is counted; when that number exceeds the preset number of grids, the image portion corresponding to each first grid is denoised using the exposure index corresponding to that first grid, and the image portions corresponding to the other grids, other than the first grids, are denoised using the exposure index corresponding to the average value, to obtain a target image. The embodiment of the present application avoids the problem of inconsistent noise levels in bright and dark areas that arises when the entire image is denoised with only one exposure index, and achieves a better denoising effect through local dynamic denoising.
From the description of the above embodiments, those skilled in the art can clearly understand that the embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and certainly also by hardware. Based on this understanding, the above technical solutions, in essence or in the part contributing to the prior art, can be embodied in the form of a software product; the computer software product can be stored in a computer-readable storage medium such as a ROM/RAM, a magnetic disk or an optical disc, and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute the methods described in the embodiments or in certain parts of the embodiments.
Finally, it should be noted that the above embodiments are merely intended to illustrate the technical solutions of the present application rather than to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements for some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present application.

Claims (8)

  1. An image processing method, characterized by comprising:
    identifying the objects contained in an image to be processed and dividing the image into grids;
    acquiring the sensitivity (ISO) value of each grid in the area covered by each object;
    calculating the average of the grid ISO values within the coverage area of each object;
    counting the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value;
    if the number exceeds a preset number of grids, acquiring the exposure index corresponding to each of the first grids; and
    denoising the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and denoising the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain the target image.
  2. The image processing method according to claim 1, characterized by further comprising:
    if the number does not exceed the preset number of grids, acquiring the exposure index corresponding to the average value of each object; and
    denoising the corresponding image portion of each object using the exposure index of that object, to obtain a target image.
  3. The image processing method according to claim 1, characterized in that, after acquiring the sensitivity (ISO) value of each grid in the area covered by each object, the method further comprises:
    selecting the corresponding exposure index according to the ISO value of each grid; and
    denoising the image using the exposure index corresponding to each grid, to obtain the target image.
  4. The image processing method according to any one of claims 1 to 3, characterized in that the dividing of the area covered by the objects into grids comprises:
    processing the image to be processed using Haar features to obtain the objects contained in the image to be processed; and
    dividing the image to be processed into a 64×48 grid.
  5. An image processing apparatus, characterized by comprising:
    an identification and division module, configured to identify the objects contained in an image to be processed and divide the image into grids;
    a first acquisition module, configured to acquire the sensitivity (ISO) value of each grid in the area covered by each object;
    a calculation module, configured to calculate the average of the grid ISO values within the coverage area of each object;
    a second acquisition module, configured to count the number of first grids in the coverage area of each object, wherein a first grid is any grid in an object's coverage area whose ISO value differs from the average value by more than a preset value;
    a third acquisition module, configured to acquire, if the number exceeds a preset number of grids, the exposure index corresponding to each of the first grids; and
    a denoising module, configured to denoise the image portion corresponding to each first grid using the exposure index corresponding to that first grid, and to denoise the image portions corresponding to the other grids, other than the first grids, using the exposure index corresponding to the average value, to obtain a target image.
  6. The image processing apparatus according to claim 5, characterized in that the third acquisition module is further configured to acquire, if the number does not exceed the preset number of grids, the exposure index corresponding to the average value of each object; and
    the denoising module is further configured to denoise the corresponding image portion of each object using the exposure index of that object, to obtain the target image.
  7. The image processing apparatus according to claim 6, characterized in that the first acquisition module is further configured to, after the sensitivity (ISO) value of each grid in the area covered by each object is acquired, select the corresponding exposure index according to the ISO value of each grid; and
    the denoising module is further configured to denoise the image using the exposure index corresponding to each grid, to obtain the target image.
  8. The image processing apparatus according to any one of claims 5 to 7, characterized in that the identification and division module is specifically configured to process the image to be processed using Haar features to obtain the objects contained in the image to be processed, and to divide the image to be processed into a 64×48 grid.
PCT/CN2016/089024 2015-11-15 2016-07-07 Image processing method and apparatus WO2017080236A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510782728.9 2015-11-15
CN201510782728.9A CN105898151A (zh) 2015-11-15 2015-11-15 图像处理方法及装置

Publications (1)

Publication Number Publication Date
WO2017080236A1 true WO2017080236A1 (zh) 2017-05-18

Family

ID=57001805

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/089024 WO2017080236A1 (zh) 2015-11-15 2016-07-07 图像处理方法及装置

Country Status (2)

Country Link
CN (1) CN105898151A (zh)
WO (1) WO2017080236A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111754385A (zh) * 2019-03-26 2020-10-09 深圳中科飞测科技有限公司 Data point model processing method and system, detection method and system, and readable medium

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111131716B (zh) * 2019-12-31 2021-06-15 联想(北京)有限公司 Image processing method and electronic device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100066868A1 (en) * 2008-09-12 2010-03-18 Canon Kabushiki Kaisha Image processing apparatus and method of processing image
CN104702824A * 2013-12-10 2015-06-10 佳能株式会社 Image pickup apparatus and control method of image pickup apparatus
CN104902143A * 2015-05-21 2015-09-09 广东欧珀移动通信有限公司 Resolution-based image denoising method and apparatus
CN105005973A * 2015-06-30 2015-10-28 广东欧珀移动通信有限公司 Method and apparatus for fast image denoising

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5451285B2 (ja) * 2009-09-24 2014-03-26 キヤノン株式会社 Image processing apparatus and image processing method

Also Published As

Publication number Publication date
CN105898151A (zh) 2016-08-24

Similar Documents

Publication Publication Date Title
US10949952B2 (en) Performing detail enhancement on a target in a denoised image
WO2018176938A1 (zh) 红外光斑中心点提取方法、装置和电子设备
KR102480245B1 (ko) 패닝 샷들의 자동 생성
EP3480784B1 (en) Image processing method, and device
Zhuo et al. Defocus map estimation from a single image
US10410327B2 (en) Shallow depth of field rendering
US9836855B2 (en) Determining a depth map from images of a scene
JP2020536457A (ja) 画像処理方法および装置、電子機器、ならびにコンピュータ可読記憶媒体
Hajisharif et al. Adaptive dualISO HDR reconstruction
US20110280475A1 (en) Apparatus and method for generating bokeh effect in out-focusing photography
JP5779089B2 (ja) エッジ検出装置、エッジ検出プログラム、およびエッジ検出方法
JP7212554B2 (ja) 情報処理方法、情報処理装置、及びプログラム
JP5725194B2 (ja) 夜景画像ボケ検出システム
JP6720845B2 (ja) 画像処理装置、画像処理方法及びプログラム
KR102360773B1 (ko) 스테레오-시간적 이미지 시퀀스들로부터 향상된 3-d 데이터 재구성을 위한 방법들 및 장치
CN110942427A (zh) 一种图像降噪的方法及装置、设备、存储介质
US9538074B2 (en) Image processing apparatus, imaging apparatus, and image processing method
Chen et al. Weighted sparse representation and gradient domain guided filter pyramid image fusion based on low-light-level dual-channel camera
CN108234826B (zh) 图像处理方法及装置
CN113673474B (zh) 图像处理方法、装置、电子设备及计算机可读存储介质
WO2017080236A1 (zh) 图像处理方法及装置
US11727540B2 (en) Image sharpening
Chen et al. Hybrid saliency detection for images
JP2019160297A (ja) 画像信号から階段状アーチファクトを低減するための画像処理装置
US11205064B1 (en) Measuring quality of depth images in real time

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16863414

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16863414

Country of ref document: EP

Kind code of ref document: A1