WO2016054904A1 - Image processing method, image processing device and display device - Google Patents

Image processing method, image processing device and display device

Info

Publication number
WO2016054904A1
WO2016054904A1 (PCT/CN2015/076938)
Authority
WO
WIPO (PCT)
Prior art keywords
feature
image
image processing
scene
color space
Prior art date
Application number
PCT/CN2015/076938
Other languages
English (en)
French (fr)
Inventor
张晓�
Original Assignee
BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BOE Technology Group Co., Ltd. (京东方科技集团股份有限公司)
Priority to EP15762465.1A priority Critical patent/EP3206185B1/en
Priority to US14/777,851 priority patent/US20160293138A1/en
Publication of WO2016054904A1 publication Critical patent/WO2016054904A1/zh

Classifications

    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G5/00 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators
    • G09G5/02 Control arrangements or circuits for visual indicators common to cathode-ray tube indicators and other visual indicators characterised by the way in which colour is displayed
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/435 Computation of moments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20048 Transform domain processing
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/06 Adjustment of display parameters
    • G09G2320/0613 The adjustment depending on the type of the information to be displayed
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/06 Adjustment of display parameters
    • G09G2320/0626 Adjustment of display parameters for control of overall brightness
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/06 Adjustment of display parameters
    • G09G2320/066 Adjustment of display parameters for control of contrast
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2320/00 Control of display operating conditions
    • G09G2320/06 Adjustment of display parameters
    • G09G2320/0666 Adjustment of display parameters for control of colour parameters, e.g. colour temperature
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2340/00 Aspects of display data processing
    • G09G2340/06 Colour space transformation
    • G PHYSICS
    • G09 EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09G ARRANGEMENTS OR CIRCUITS FOR CONTROL OF INDICATING DEVICES USING STATIC MEANS TO PRESENT VARIABLE INFORMATION
    • G09G2360/00 Aspects of the architecture of display systems
    • G09G2360/16 Calculation or use of calculated indices related to luminance levels in display data

Definitions

  • the present invention relates to the field of display technologies, and in particular, to an image processing method, an image processing apparatus, and a display device.
  • because an image may be affected during acquisition by factors such as the dynamic range of the imaging device and the intensity of the ambient light, the image may exhibit low contrast, indistinct image information, color distortion, and insufficiently clear target contours or boundary information. This makes human visual observation and machine analysis difficult, so the image needs to be enhanced.
  • Image enhancement refers to the processing of highlighting certain information of an image according to a specific need, while weakening or removing some unwanted information, thereby improving the visual effect of the image, providing an intuitive, clear and suitable image for analysis.
  • image enhancement includes three aspects of contrast enhancement, image sharpening and noise filtering.
  • the contrast enhancement is used to improve the visibility of the image, highlighting information hidden by illumination, exposure, and the like.
  • image sharpening is used to improve the sharpness of the target object, for example, highlighting contours or boundary information, making the target object easier to detect and recognize.
  • the noise filtering is used to attenuate the effects of noise caused by image imaging and transmission.
  • the existing image processing method adjusts the brightness and chromaticity of an image in a uniform manner to improve the contrast and saturation of the image.
  • people have a cognitive understanding of the scenes in an image. Because existing image processing methods lack this specificity, the processed image deviates from the human eye's perception of the image. Existing image processing methods are therefore limited in how much they can improve image quality.
  • the present invention provides an image processing method, an image processing apparatus, and a display device, for solving the problem that image processing methods in the prior art lack specificity and are therefore limited in improving image quality.
  • the present invention provides an image processing method comprising: identifying at least one scene from an original image; determining an enhancement method corresponding to the scene; and performing image processing on the corresponding scene by the enhancement method to obtain an enhanced image.
  • the determining of an enhancement method corresponding to the scene includes: extracting feature information from the scene; matching the feature information with feature values in a feature database; if the feature information is successfully matched with a feature value in the feature database, determining the category of the scene according to the matched feature value; and querying, from an enhancement method database, the enhancement method corresponding to the category of the scene, so as to determine the enhancement method corresponding to the scene.
  • the step of identifying at least one scene from the original image comprises: converting the original image from a first color space to a second color space, the first color space comprising three components of red, green, and blue, and the second color space comprising one luminance component and two chrominance components.
  • the step of identifying at least one scene from the original image further comprises: dividing the original image into multiple scenes in the second color space.
  • the feature information includes a color feature, a texture feature, and a transform domain feature
  • the extracting the feature information from the scenario includes: extracting the color feature from the two chroma components;
  • the texture feature and the transform domain feature are extracted from the luminance component.
  • the step of performing image processing on the corresponding scene by the enhancement method to obtain the enhanced image comprises: converting the enhanced image from the second color space to the first color space.
  • the present invention further provides an image processing apparatus, comprising: an identification unit, configured to identify at least one scene from the original image; a determining unit, configured to determine an enhancement method corresponding to the scene; and a processing unit, And performing image processing on the corresponding scene by the enhancement method to obtain an enhanced image.
  • the determining unit includes: an extracting module, configured to extract feature information from the scene; a matching module, configured to match the feature information with feature values in the feature database; a determining module, configured to determine the category of the scene according to the matched feature value when the feature information is successfully matched with a feature value in the feature database; and a query module, configured to query, from the enhancement method database, the enhancement method corresponding to the category of the scene, so as to determine the enhancement method corresponding to the scene.
  • the identifying unit includes: a first converting module, configured to convert the original image from a first color space to a second color space, where the first color space includes three components of red, green, and blue.
  • the second color space includes one luminance component and two chrominance components.
  • the identifying unit further includes: a segmentation module, configured to divide the original image into multiple scenes in the second color space.
  • the feature information includes a color feature, a texture feature, and a transform domain feature
  • the extracting module includes: a first extracting submodule, configured to extract the color feature from the two chroma components; An extraction submodule for extracting the texture feature and the transform domain feature from the luminance component.
  • the processing unit includes: a second conversion module, configured to convert the enhanced image from the second color space to the first color space.
  • the present invention also provides a display device comprising the image processing device of any of the above.
  • a plurality of scenes are identified from the original image, an enhancement method corresponding to each scene is determined, and image processing is performed on the corresponding scene by the enhancement method to obtain an enhanced image.
  • different image processing methods are used in a targeted manner, so that the processed image is more in line with the human eye's recognition of the image, thereby achieving the best image display effect.
  • FIG. 1 is a flow chart of an image processing method of the present invention
  • FIG. 2 is a schematic structural view of an image processing apparatus according to the present invention.
  • FIG. 1 is a flow chart of an image processing method of the present invention. As shown in FIG. 1, the method includes:
  • Step 101 Identify at least one scene from the original image.
  • the step 101 includes: converting the original image from a first color space to a second color space, where the first color space includes three components of red, green, and blue, and the second color space includes A luminance component and two chrominance components for describing grayscale information of an image, the two chrominance components being used to describe color and saturation information.
  • the image information collected by the image acquisition device is information describing each pixel of the image in the first color space. To avoid loss of image information during image processing, the image may be converted from the first color space to the second color space.
  • the original image is segmented into a plurality of scenes in the second color space.
  • One of the purposes of scene segmentation of the original image in the second color space is to identify the different scenes in the original image. Only after the different scenes in the original image have been identified is it possible to apply different image processing methods according to the characteristics of each scene, so that the processed image matches the human eye's perception of the image, thereby achieving the best display effect. It can be understood that, to save computation, only one specific scene may be identified, for example a scene that has a large visual impact when viewed by the user, and enhancement may then be applied to that scene.
  • Step 102 Determine an enhancement method corresponding to the scenario.
  • feature information may be extracted from the scene, and the feature information may include a color, a shape, a texture, and a spatial relationship.
  • the color feature information is mainly extracted in the two chrominance components, and the shape feature information, the texture feature information, and the spatial relationship feature information are mainly extracted from the luminance component.
  • the feature information includes a color feature, a texture feature, and a transform domain feature
  • the color feature is extracted from the two chroma components
  • the texture feature and the transform domain feature are extracted from the luma component.
  • the texture feature includes seven features:
  • the angular second moment represents the sum of the squares of the elements in the gray-level co-occurrence matrix, and is also known as energy.
  • the angular second moment measures the uniformity of the gray-scale variation of the image texture, reflecting the uniformity of the gray-level distribution and the coarseness of the texture.
  • p(i, j) represents the gray level of a two-dimensional image at point (i, j).
  • the gray levels of the image are usually represented by 256 levels, L = 1, 2, ..., 256.
  • n is the difference between the row position and the column position.
  • the contrast reflects the sharpness of the image and the depth of the texture grooves: the deeper the grooves, the greater the contrast and the sharper the displayed image; the shallower the grooves, the smaller the contrast and the more blurred the image.
  • the entropy describes the randomness of the image texture, reflecting the non-uniformity and complexity of the texture in the image.
  • the inverse difference moment measures the variation of the local texture of the image; the larger its value, the more uniform the local texture and the smaller the variation.
  • the transform domain feature in the feature information may be extracted from the scene.
  • the transform domain feature is obtained by a Gabor transform, which was developed on the basis of the Fourier transform; in essence, a window function representing time is added to the Fourier transform to give a time-varying description of the signal spectrum.
  • when the window function is a Gaussian function, the Fourier transform becomes the Gabor transform.
  • extracting the transform domain feature from the original image with the Gabor transform is implemented by convolving the original image with a Gabor filter; the Gabor filter includes Gabor sub-band filters, and the Gabor transform includes the Gabor wavelet transform.
  • given an original image f(x, y), where f(x, y) is the gray value at pixel position (x, y), the Gabor wavelet transform of the original image can be expressed as w_mn(x, y) = f(x, y) * g_mn(x, y), where * denotes convolution.
  • g_mn(x, y) is a bank of Gabor sub-band filters of different scales and directions, where m is the scale level and n is the direction.
  • when the scale and direction are given, the Gabor transform sub-band images of the original image can be obtained.
  • a filter bank of 24 Gabor sub-band filters composed of 3 scales and 8 directions is used; with this Gabor sub-band filter bank, a transform domain feature composed of 48 feature vectors can be obtained.
  • the feature information is matched with the feature value in the feature database. If the feature information matches the feature value in the feature database, the category of the scene is determined according to the matched feature value.
  • An enhancement method corresponding to the category of the scene is queried in the enhanced method database.
  • the feature database is a database established by feature values of a plurality of scenarios.
  • the scene in this embodiment may be divided according to the scene, and the scene includes, for example, sky, water surface, vegetation, snow, buildings, and the like. Of course, in practical applications, other scenarios can also be divided into other scenarios.
  • the enhanced method database is a database established by an enhanced method corresponding to different scenarios.
  • the enhancement method includes a method of processing such as contrast enhancement, image denoising, edge sharpening, color enhancement, and the like.
  • the image is processed in a targeted manner by using the enhancement method corresponding to the scene (for example, vegetation is enhanced with a color enhancement method and buildings are processed with an edge sharpening method), so that the processed image better matches the human eye's perception of the image, thereby achieving the best display effect.
  • Step 103 Perform image processing on the corresponding scene by using the enhancement method to obtain an enhanced image.
  • the enhanced image is obtained by performing targeted processing on the original image. Since display systems generally use the first color space, the enhanced image also needs to be converted back to the first color space so that the image can be displayed.
  • a plurality of scenes are identified from the original image, and an enhancement method corresponding to the scene is determined, and the corresponding enhancement method is used to correspond to The scene is image processed to obtain an enhanced image.
  • different image processing methods are used in a targeted manner, so that the processed image is more in line with the human eye's recognition of the image, thereby achieving the best image display effect.
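The three-step method (identify scenes, determine the enhancement method, enhance the corresponding scene) can be sketched as a small pipeline. Here `identify`, `determine`, and the `enhancers` table are caller-supplied stand-ins for the scene recognition, database lookup, and concrete enhancement operations; none of these names come from the patent.

```python
def enhance_image(image, identify, determine, enhancers):
    """Apply scene-specific enhancement to an image.

    image:     any pixel container
    identify:  image -> {scene_name: region} (scene recognition, step 101)
    determine: scene_name -> enhancement method name (step 102)
    enhancers: method name -> function(image, region) -> image (step 103)
    """
    result = image
    for name, region in identify(result).items():
        method = determine(name)
        if method in enhancers:
            result = enhancers[method](result, region)
    return result
```

Each recognized scene is routed to its own enhancement function, which is exactly the targeted processing the bullets above describe.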
  • the image processing apparatus includes an identification unit 201, a determination unit 202, and a processing unit 203.
  • the identifying unit 201 is configured to identify a plurality of scenes from the original image
  • the determining unit 202 is configured to determine an enhancement method corresponding to the scene
  • the processing unit 203 is configured to perform image processing on the corresponding scene by the enhancement method to obtain an enhanced image. It can be understood that, to save computation, the identification unit 201 may identify only one specific scene in the original image, for example a scene that has a large visual impact when viewed by the user, and enhancement is then applied to that scene.
  • the identifying unit 201 includes a first converting module 301, configured to convert the original image from a first color space to a second color space, where the first color space includes red Three components, green and blue, the second color space includes a luminance component and two chrominance components, the luminance component is used to describe grayscale information of an image, and the two chrominance components are used to describe color and Saturation information.
  • the image information collected by the image acquisition device is information describing each pixel point of the image in the first color space. To avoid loss of image information during image processing, the image may be The first color space is converted to the second color space.
  • the identification unit 201 further includes a segmentation module 302 for segmenting the original image into a plurality of scenes in the second color space.
  • scene segmentation is performed on the original image in the second color space; one of its purposes is to identify the different scenes in the original image. Only after the different scenes in the original image have been identified is it possible to apply different image processing methods according to the characteristics of each scene, so that the processed image better matches the human eye's perception of the image, thereby achieving the best display effect.
  • the determining unit 202 includes an extracting module 303, a matching module 304, a determining module 305, and a query module 306.
  • the extracting module 303 is configured to extract feature information from the scene
  • the feature information may include a color, a shape, a texture, and a spatial relationship.
  • the color feature information is mainly extracted in the two chrominance components, and the shape feature information, the texture feature information, and the spatial relationship feature information are mainly extracted from the luminance component.
  • the feature information includes a color feature, a texture feature, and a transform domain feature
  • the extraction module 303 includes a first extraction submodule and a second extraction submodule.
  • the first extraction sub-module is configured to extract the color feature from the two chrominance components
  • the second extraction sub-module is configured to extract the texture feature and the transform domain feature from the luminance component.
  • the matching module 304 is configured to match the feature information with the feature value in the feature database
  • the determining module 305 is configured to determine the category of the scene according to the matched feature value when the feature information is successfully matched with a feature value in the feature database
  • the query module 306 is configured to query, from the enhanced method database, an enhancement method corresponding to the category of the scene.
  • the feature database is a database established by feature values of a plurality of scenarios.
  • the scene in this embodiment may be divided according to the scene, and the scene includes, for example, sky, water surface, vegetation, snow, buildings, and the like. Of course, in practical applications, other scenarios can also be divided into other scenarios.
  • the enhanced method database is a database established by an enhanced method corresponding to different scenarios.
  • the enhancement method includes a method of processing such as contrast enhancement, image denoising, edge sharpening, color enhancement, and the like.
  • the image is processed in a targeted manner by using an enhancement method corresponding to the scene, so that the processed image is more in line with the human eye's recognition of the image, thereby achieving an optimal image display effect.
  • the processing unit 203 includes a second conversion module 307, and the second conversion module 307 is configured to convert the enhanced image from the second color space to the first color space.
  • the enhanced image is obtained by performing targeted processing on the original image. Since display systems generally use the first color space, the enhanced image also needs to be converted back to the first color space so that the image can be displayed.
  • a plurality of scenes are identified from the original image, and an enhancement method corresponding to the scene is determined, and the corresponding scene is subjected to image processing by the enhancement method to obtain an enhanced image.
  • different image processing methods are used in a targeted manner, so that the processed image better matches the human eye's perception of the image, thereby achieving the best display effect.
  • the embodiment provides a display device, which includes the image processing device provided in the second embodiment.
  • a plurality of scenes are identified from the original image, and an enhancement method corresponding to the scene is determined, and the corresponding scene is subjected to image processing by the enhancement method to obtain an enhanced image.
  • different image processing methods are used in a targeted manner, so that the processed image is more in line with the human eye's recognition of the image, thereby achieving the best image display effect.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computer Hardware Design (AREA)
  • Data Mining & Analysis (AREA)
  • Computing Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an image processing method, an image processing device, and a display device. The image processing method includes: identifying a plurality of scenes from an original image, determining an enhancement method corresponding to each scene, and performing image processing on the corresponding scene by the enhancement method to obtain an enhanced image. According to the characteristics of different scenes, different image processing methods are applied in a targeted manner, so that the processed image matches the human eye's perception of the image, thereby achieving the best display effect.

Description

Image processing method, image processing device and display device

Technical Field

The present invention relates to the field of display technologies, and in particular to an image processing method, an image processing device, and a display device.

Background Art

During acquisition, an image may be affected by factors such as the dynamic range of the imaging device and the intensity of the ambient light, so the image may exhibit low contrast, indistinct image information, color distortion, and insufficiently clear target contours or boundary information. This makes human visual observation and machine analysis difficult, so the image needs to be enhanced.

Image enhancement refers to processing that highlights certain information of an image according to a specific need while weakening or removing unwanted information, thereby improving the visual effect of the image and providing an intuitive, clear image suitable for analysis. Image enhancement usually covers three aspects: contrast enhancement, image sharpening, and noise filtering. Contrast enhancement is used to improve the visibility of the image, bringing out information hidden by illumination, exposure, and the like. Image sharpening is used to improve the sharpness of the target object, for example by highlighting contours or boundary information so that the target object is easier to detect and recognize. Noise filtering is used to attenuate the noise introduced during image formation and transmission.

Existing image processing methods adjust the brightness and chromaticity of an image in a uniform way to improve its contrast and saturation. However, people have a cognitive understanding of the scenes in an image; because existing methods lack this specificity, the processed image deviates from the human eye's perception of the image. Existing image processing methods are therefore limited in how much they can improve image quality.

Summary of the Invention

To solve the above problems, the present invention provides an image processing method, an image processing device, and a display device, which address the problem that image processing methods in the prior art lack specificity and are therefore limited in improving image quality.
To this end, the present invention provides an image processing method, including: identifying at least one scene from an original image; determining an enhancement method corresponding to the scene; and performing image processing on the corresponding scene by the enhancement method to obtain an enhanced image.

Optionally, determining the enhancement method corresponding to the scene includes: extracting feature information from the scene; matching the feature information with feature values in a feature database; if the feature information is successfully matched with a feature value in the feature database, determining the category of the scene according to the matched feature value; and querying, from an enhancement method database, the enhancement method corresponding to the category of the scene, so as to determine the enhancement method corresponding to the scene.

Optionally, the step of identifying at least one scene from the original image includes: converting the original image from a first color space to a second color space, the first color space including three components of red, green, and blue, and the second color space including one luminance component and two chrominance components.

Optionally, the step of identifying at least one scene from the original image further includes: segmenting the original image into a plurality of scenes in the second color space.

Optionally, the feature information includes a color feature, a texture feature, and a transform domain feature, and the step of extracting feature information from the scene includes: extracting the color feature from the two chrominance components; and extracting the texture feature and the transform domain feature from the luminance component.

Optionally, the step of performing image processing on the corresponding scene by the enhancement method to obtain an enhanced image includes: converting the enhanced image from the second color space to the first color space.

The present invention further provides an image processing device, including: an identification unit, configured to identify at least one scene from an original image; a determining unit, configured to determine an enhancement method corresponding to the scene; and a processing unit, configured to perform image processing on the corresponding scene by the enhancement method to obtain an enhanced image.

Optionally, the determining unit includes: an extracting module, configured to extract feature information from the scene; a matching module, configured to match the feature information with feature values in a feature database; a determining module, configured to determine the category of the scene according to the matched feature value when the feature information is successfully matched with a feature value in the feature database; and a query module, configured to query, from an enhancement method database, the enhancement method corresponding to the category of the scene, so as to determine the enhancement method corresponding to the scene.

Optionally, the identification unit includes: a first conversion module, configured to convert the original image from a first color space to a second color space, the first color space including three components of red, green, and blue, and the second color space including one luminance component and two chrominance components.

Optionally, the identification unit further includes: a segmentation module, configured to segment the original image into a plurality of scenes in the second color space.

Optionally, the feature information includes a color feature, a texture feature, and a transform domain feature, and the extracting module includes: a first extraction submodule, configured to extract the color feature from the two chrominance components; and a second extraction submodule, configured to extract the texture feature and the transform domain feature from the luminance component.

Optionally, the processing unit includes: a second conversion module, configured to convert the enhanced image from the second color space to the first color space.

The present invention further provides a display device, including any of the image processing devices described above.
The present invention has the following beneficial effects:

In the image processing device, image processing method, and display device provided by the present invention, a plurality of scenes are identified from an original image, an enhancement method corresponding to each scene is determined, and image processing is performed on the corresponding scene by the enhancement method to obtain an enhanced image. According to the characteristics of different scenes, different image processing methods are applied in a targeted manner, so that the processed image better matches the human eye's perception of the image, thereby achieving the best display effect.

Brief Description of the Drawings

FIG. 1 is a flow chart of an image processing method of the present invention;

FIG. 2 is a schematic structural diagram of an image processing device of the present invention.
Detailed Description

To enable those skilled in the art to better understand the technical solutions of the present invention, the image processing method, image processing device, and display device provided by the present invention are described in detail below with reference to the accompanying drawings.

Embodiment 1

FIG. 1 is a flow chart of an image processing method of the present invention. As shown in FIG. 1, the method includes:

Step 101: identify at least one scene from an original image.

Optionally, step 101 includes: converting the original image from a first color space to a second color space, the first color space including three components of red, green, and blue, and the second color space including one luminance component and two chrominance components, where the luminance component describes the gray-scale information of the image and the two chrominance components describe color and saturation information. In practice, the image information collected by an image acquisition device describes each pixel of the image in the first color space; to avoid losing image information during image processing, the image may be converted from the first color space to the second color space.
In this embodiment, the original image is segmented into a plurality of scenes in the second color space. One purpose of performing scene segmentation on the original image in the second color space is to identify the different scenes in the original image. Only after the different scenes in the original image have been identified is it possible to apply different image processing methods according to the characteristics of each scene, so that the processed image matches the human eye's perception of the image, thereby achieving the best display effect. It can be understood that, to save computation, only one specific scene may be identified, for example a scene that has a large visual impact when viewed by the user, and enhancement may then be applied to that scene.
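A toy illustration of chroma-based scene labelling in the second color space: each pixel is assigned to the nearest chroma prototype. The prototype (Cb, Cr) values and scene names are invented for illustration and are not from the patent.

```python
# Illustrative chroma prototypes in a YCbCr-like space (values assumed).
PROTOTYPES = {
    "sky":        (115, 150),   # bluish chroma
    "vegetation": (110, 110),   # greenish chroma
    "building":   (128, 128),   # near-neutral chroma
}

def label_pixel(cb, cr):
    """Return the prototype label with the smallest squared chroma distance."""
    return min(PROTOTYPES,
               key=lambda k: (PROTOTYPES[k][0] - cb) ** 2 +
                             (PROTOTYPES[k][1] - cr) ** 2)

def segment(chroma_image):
    """chroma_image: 2-D list of (Cb, Cr) pairs -> 2-D list of scene labels."""
    return [[label_pixel(cb, cr) for cb, cr in row] for row in chroma_image]
```

A real implementation would segment on contiguous regions rather than single pixels, but the principle of grouping by position in the chrominance plane is the same.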
Step 102: determine an enhancement method corresponding to the scene.

In this embodiment, feature information may be extracted from the scene, and the feature information may include color, shape, texture, and spatial-relationship features. The color feature information is mainly extracted from the two chrominance components, while the shape, texture, and spatial-relationship feature information is mainly extracted from the luminance component.

Preferably, the feature information includes a color feature, a texture feature, and a transform domain feature; the color feature is extracted from the two chrominance components, and the texture feature and the transform domain feature are extracted from the luminance component.
In this embodiment, the texture feature includes seven features, all computed from the gray-level co-occurrence matrix p(i, j):

1) Angular second moment

f₁ = Σᵢ Σⱼ p(i, j)²

The angular second moment is the sum of the squares of the elements of the gray-level co-occurrence matrix, and is also known as energy. It measures the uniformity of the gray-scale variation of the image texture, reflecting the uniformity of the gray-level distribution and the coarseness of the texture. Here p(i, j) represents the gray level of a two-dimensional image at point (i, j); the gray levels of the image are usually represented by 256 levels, L = 1, 2, ..., 256.

2) Contrast

f₂ = Σₙ n² [ Σ_{|i−j|=n} p(i, j) ]

where n is the difference between the row position and the column position. The contrast reflects the sharpness of the image and the depth of the texture grooves: the deeper the grooves, the greater the contrast and the sharper the displayed image; the shallower the grooves, the smaller the contrast and the more blurred the image.

3) Correlation

f₃ = [ Σᵢ Σⱼ (i·j) p(i, j) − μₓ μ_y ] / (σₓ σ_y)

where μₓ, μ_y and σₓ, σ_y are the means and standard deviations of the marginal distributions pₓ(i) = Σⱼ p(i, j) and p_y(j) = Σᵢ p(i, j).

4) Entropy

f₄ = − Σᵢ Σⱼ p(i, j) log p(i, j)

The entropy describes the randomness of the image texture, reflecting the non-uniformity and complexity of the texture in the image.

5) Variance

f₅ = Σᵢ Σⱼ (i − μ)² p(i, j)

where μ is the mean of p(i, j).

6) Inverse difference moment

f₆ = Σᵢ Σⱼ p(i, j) / [1 + (i − j)²]

The inverse difference moment measures the variation of the local texture of the image; the larger its value, the more uniform the local texture and the smaller the variation.

7) First average correlation information

f₇ = (HXY − HXY1) / max{HX, HY}

Second average correlation information

f₈ = {1 − exp[−0.2(HXY2 − HXY)]}^(1/2)

where

HXY = − Σᵢ Σⱼ p(i, j) log p(i, j)

HXY1 = − Σᵢ Σⱼ p(i, j) log[pₓ(i) p_y(j)]

HXY2 = − Σᵢ Σⱼ pₓ(i) p_y(j) log[pₓ(i) p_y(j)]

and HX and HY are the entropies of the marginal distributions pₓ and p_y.

The above seven specific features of the texture feature are extracted from the luminance component so that the texture feature is described more precisely, and the enhancement method corresponding to the scene can thus be determined more accurately.
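The co-occurrence features above can be sketched in pure Python. This minimal version builds the normalized gray-level co-occurrence matrix for a single pixel offset and computes four of the seven features (angular second moment, contrast, entropy, inverse difference moment); it is an illustration of the definitions, not the patent's implementation.

```python
import math
from collections import defaultdict

def glcm(img, dx=1, dy=0):
    """Normalized gray-level co-occurrence matrix p(i, j) for one offset.

    img is a 2-D list of integer gray levels; the returned dict maps a
    gray-level pair (i, j) to its co-occurrence probability.
    """
    counts, total = defaultdict(int), 0
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            nx, ny = x + dx, y + dy
            if 0 <= nx < w and 0 <= ny < h:
                counts[(img[y][x], img[ny][nx])] += 1
                total += 1
    return {ij: c / total for ij, c in counts.items()}

def texture_features(p):
    """Angular second moment, contrast, entropy, inverse difference moment."""
    asm      = sum(v * v for v in p.values())
    contrast = sum(((i - j) ** 2) * v for (i, j), v in p.items())
    entropy  = -sum(v * math.log(v) for v in p.values() if v > 0)
    idm      = sum(v / (1 + (i - j) ** 2) for (i, j), v in p.items())
    return asm, contrast, entropy, idm
```

On a perfectly uniform image the matrix collapses to a single entry, giving maximal energy (1.0), zero contrast, and zero entropy, which matches the interpretation of the features given above.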
In this embodiment, the transform domain feature in the feature information may be extracted from the scene. The transform domain feature is obtained by a Gabor transform, which was developed on the basis of the Fourier transform; in essence, a window function representing time is added to the Fourier transform to give a time-varying description of the signal spectrum. When the window function is a Gaussian function, the Fourier transform becomes the Gabor transform. Extracting the transform domain feature from the original image with the Gabor transform is implemented by convolving the original image with a Gabor filter; the Gabor filter includes Gabor sub-band filters, and the Gabor transform includes the Gabor wavelet transform. Given an original image f(x, y), where f(x, y) is the gray value at pixel position (x, y), the Gabor wavelet transform of the original image can be expressed as:

w_mn(x, y) = f(x, y) * g_mn(x, y)

where * denotes convolution and g_mn(x, y) is a bank of Gabor sub-band filters of different scales and directions, m being the scale level and n the direction. When the scale and direction are given, the Gabor transform sub-band images of the original image can be obtained. This embodiment uses a filter bank of 24 Gabor sub-band filters composed of 3 scales and 8 directions; with this bank, a transform domain feature composed of 48 feature vectors can be obtained.
In this embodiment, the feature information is matched with the feature values in the feature database. If the feature information is successfully matched with a feature value in the feature database, the category of the scene is determined according to the matched feature value, and the enhancement method corresponding to the category of the scene is queried from the enhancement method database. The feature database is a database built from the feature values of a plurality of scenes. The scenes in this embodiment may be divided according to the subject matter and include, for example, sky, water surface, vegetation, snow, and buildings. Of course, in practical applications the image may also be divided into other scenes on another basis. The enhancement method database is a database built from the enhancement methods corresponding to the different scenes. The enhancement methods include processing such as contrast enhancement, image denoising, edge sharpening, and color enhancement. Using the enhancement method corresponding to the scene, the image is processed in a targeted way (for example, vegetation is enhanced with a color enhancement method and buildings are processed with an edge sharpening method), so that the processed image better matches the human eye's perception of the image, thereby achieving the best display effect.
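The matching and lookup steps can be sketched as a nearest-neighbour search against the feature database followed by a query of the enhancement method database. Every database entry, the feature vectors, and the distance threshold here are illustrative stand-ins; the patent does not specify a distance measure.

```python
# Illustrative feature database: scene category -> reference feature vector.
FEATURE_DB = {
    "sky":        [0.8, 0.1, 0.2],
    "vegetation": [0.2, 0.9, 0.3],
    "building":   [0.4, 0.3, 0.8],
}

# Illustrative enhancement method database: scene category -> method name.
ENHANCEMENT_DB = {
    "sky":        "contrast_enhancement",
    "vegetation": "color_enhancement",
    "building":   "edge_sharpening",
}

def match_scene(features, threshold=1.0):
    """Nearest-neighbour match; returns None when no entry is close enough
    (i.e. the match is not 'successful' in the patent's terms)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    best = min(FEATURE_DB, key=lambda k: dist(features, FEATURE_DB[k]))
    return best if dist(features, FEATURE_DB[best]) <= threshold else None

def enhancement_for(features):
    """Query the enhancement method corresponding to the matched category."""
    scene = match_scene(features)
    return ENHANCEMENT_DB.get(scene)
```

A production system would use the full color, texture, and transform domain feature vector and likely a trained classifier, but the database-lookup structure is the one described above.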
Step 103: perform image processing on each corresponding scene with its enhancement method to obtain an enhanced image.
In this embodiment, the enhanced image is obtained by processing the original image in this targeted way. Since display systems generally use the first color space, the enhanced image also has to be converted back to the first color space before it can be displayed.
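For this final conversion back to the first color space, a typical choice of "second color space" is YCbCr; the patent does not fix the exact transform, so the BT.601 full-range matrix below is an assumption. The round trip RGB → YCbCr → RGB is lossless up to floating-point error:

```python
import numpy as np

# Assumed BT.601 full-range RGB -> YCbCr matrix (other standards differ slightly).
M = np.array([
    [ 0.299,     0.587,     0.114   ],
    [-0.168736, -0.331264,  0.5     ],
    [ 0.5,      -0.418688, -0.081312],
])

def rgb_to_ycbcr(rgb):
    ycc = rgb @ M.T
    ycc[..., 1:] += 128.0          # offset the two chrominance components
    return ycc

def ycbcr_to_rgb(ycc):
    tmp = ycc.copy()
    tmp[..., 1:] -= 128.0
    return tmp @ np.linalg.inv(M).T

pixel = np.array([[120.0, 200.0, 50.0]])       # one RGB pixel
restored = ycbcr_to_rgb(rgb_to_ycbcr(pixel))
print(np.allclose(restored, pixel))            # round trip recovers the input
```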
In the image processing method provided by this embodiment, multiple scenes are identified in an original image, an enhancement method corresponding to each scene is determined, and each scene is processed with its corresponding enhancement method to obtain an enhanced image. Applying different image processing methods in a targeted way, according to the characteristics of the different scenes, makes the processed image better match human visual perception of the image, thereby achieving the best display effect.
Embodiment 2
Fig. 2 is a schematic structural diagram of an image processing apparatus of the present invention. As shown in Fig. 2, the image processing apparatus includes an identification unit 201, a determination unit 202, and a processing unit 203. The identification unit 201 is configured to identify multiple scenes in an original image; the determination unit 202 is configured to determine an enhancement method corresponding to each scene; the processing unit 203 is configured to perform image processing on each corresponding scene with its enhancement method to obtain an enhanced image. It will be appreciated that, to save computation, the identification unit 201 may identify only one particular scene in the original image, for example the scene that most affects the viewer's visual experience, and only that scene is then enhanced.
Optionally, the identification unit 201 includes a first conversion module 301 configured to convert the original image from a first color space to a second color space, the first color space comprising red, green, and blue components, and the second color space comprising one luminance component, which describes the gray-scale information of the image, and two chrominance components, which describe color and saturation information. In practice, the image information captured by an image acquisition device describes each pixel of the image in the first color space; converting the image from the first color space to the second color space avoids loss of image information during processing.
The identification unit 201 further includes a segmentation module 302 configured to segment the original image into multiple scenes in the second color space. In this embodiment, one purpose of performing scene segmentation in the second color space is to identify the different scenes in the original image: only once the different scenes are identified can different image processing methods be applied in a targeted way according to their characteristics, making the processed image better match human visual perception of the image and achieving the best display effect.
The determination unit 202 includes an extraction module 303, a matching module 304, a determination module 305, and a query module 306. The extraction module 303 is configured to extract feature information from each scene; the feature information may include color, shape, texture, and spatial-relationship features, where the color features are extracted mainly from the two chrominance components, while the shape, texture, and spatial-relationship features are extracted mainly from the luminance component. Preferably, the feature information includes a color feature, a texture feature, and a transform-domain feature, and the extraction module 303 includes a first extraction sub-module configured to extract the color feature from the two chrominance components and a second extraction sub-module configured to extract the texture feature and the transform-domain feature from the luminance component. For the details of the texture feature and the transform-domain feature, refer to the description in Embodiment 1 above, which is not repeated here.
In this embodiment, the matching module 304 is configured to match the feature information against the feature values in a feature database; the determination module 305 is configured to determine the category of the scene from the matched feature value when the feature information is successfully matched; and the query module 306 is configured to look up, in an enhancement-method database, the enhancement method corresponding to that category. The feature database is built from the feature values of multiple scenes. The scenes in this embodiment may be divided according to subject matter, for example sky, water, vegetation, snow, and buildings; in practice, other criteria and other scene divisions may equally be used. The enhancement-method database is built from the enhancement methods corresponding to the different scenes, which include contrast enhancement, image denoising, edge sharpening, color enhancement, and the like. Processing the image with the enhancement method matched to each scene makes the processed image better match human visual perception of the image, thereby achieving the best display effect.
In this embodiment, the processing unit 203 includes a second conversion module 307 configured to convert the enhanced image from the second color space back to the first color space. The enhanced image is obtained by processing the original image in this targeted way; since display systems generally use the first color space, the enhanced image has to be converted back to the first color space before it can be displayed.
In the image processing apparatus provided by this embodiment, multiple scenes are identified in an original image, an enhancement method corresponding to each scene is determined, and each scene is processed with its corresponding enhancement method to obtain an enhanced image. Applying different image processing methods in a targeted way, according to the characteristics of the different scenes, makes the processed image better match human visual perception of the image, thereby achieving the best display effect.
Embodiment 3
This embodiment provides a display device comprising the image processing apparatus of Embodiment 2; for the details, refer to the description in Embodiment 2 above, which is not repeated here.
In the display device provided by the present invention, multiple scenes are identified in an original image, an enhancement method corresponding to each scene is determined, and each scene is processed with its corresponding enhancement method to obtain an enhanced image. Applying different image processing methods in a targeted way, according to the characteristics of the different scenes, makes the processed image better match human visual perception of the image, thereby achieving the best display effect.
It will be understood that the above embodiments are merely exemplary embodiments adopted to illustrate the principles of the present invention, and the present invention is not limited thereto. Those of ordinary skill in the art can make various modifications and improvements without departing from the spirit and essence of the present invention, and such modifications and improvements are also considered to fall within the protection scope of the present invention.

Claims (13)

  1. An image processing method, comprising:
    identifying at least one scene in an original image;
    determining an enhancement method corresponding to the scene;
    performing image processing on the corresponding scene with the enhancement method to obtain an enhanced image.
  2. The image processing method according to claim 1, wherein the step of determining an enhancement method corresponding to the scene comprises:
    extracting feature information from the scene;
    matching the feature information against feature values in a feature database;
    if the feature information is successfully matched with a feature value in the feature database, determining the category of the scene from the matched feature value;
    looking up, in an enhancement-method database, the enhancement method corresponding to the category of the scene, so as to determine the enhancement method corresponding to the scene.
  3. The image processing method according to claim 1 or 2, wherein the step of identifying at least one scene in an original image comprises:
    converting the original image from a first color space to a second color space, the first color space comprising red, green, and blue components, and the second color space comprising one luminance component and two chrominance components.
  4. The image processing method according to claim 3, wherein the step of identifying at least one scene in an original image further comprises:
    segmenting the original image into multiple scenes in the second color space.
  5. The image processing method according to claim 2, wherein the feature information includes a color feature, a texture feature, and a transform-domain feature, and the step of extracting feature information from the scene comprises:
    extracting the color feature from the two chrominance components;
    extracting the texture feature and the transform-domain feature from the luminance component.
  6. The image processing method according to claim 1, wherein the step of performing image processing on the corresponding scene with the enhancement method to obtain an enhanced image comprises:
    converting the enhanced image from a second color space to a first color space.
  7. An image processing apparatus, comprising:
    an identification unit configured to identify at least one scene in an original image;
    a determination unit configured to determine an enhancement method corresponding to the scene;
    a processing unit configured to perform image processing on the corresponding scene with the enhancement method to obtain an enhanced image.
  8. The image processing apparatus according to claim 7, wherein the determination unit comprises:
    an extraction module configured to extract feature information from the scene;
    a matching module configured to match the feature information against feature values in a feature database;
    a determination module configured to determine the category of the scene from the matched feature value when the feature information is successfully matched with a feature value in the feature database;
    a query module configured to look up, in an enhancement-method database, the enhancement method corresponding to the category of the scene, so as to determine the enhancement method corresponding to the scene.
  9. The image processing apparatus according to claim 7 or 8, wherein the identification unit comprises:
    a first conversion module configured to convert the original image from a first color space to a second color space, the first color space comprising red, green, and blue components, and the second color space comprising one luminance component and two chrominance components.
  10. The image processing apparatus according to claim 9, wherein the identification unit further comprises:
    a segmentation module configured to segment the original image into multiple scenes in the second color space.
  11. The image processing apparatus according to claim 8, wherein the feature information includes a color feature, a texture feature, and a transform-domain feature, and the extraction module comprises:
    a first extraction sub-module configured to extract the color feature from the two chrominance components;
    a second extraction sub-module configured to extract the texture feature and the transform-domain feature from the luminance component.
  12. The image processing apparatus according to claim 9, wherein the processing unit comprises:
    a second conversion module configured to convert the enhanced image from the second color space to the first color space.
  13. A display device, comprising the image processing apparatus according to any one of claims 7 to 12.
PCT/CN2015/076938 2014-10-11 2015-04-20 Image processing method, image processing apparatus and display device WO2016054904A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP15762465.1A EP3206185B1 (en) 2014-10-11 2015-04-20 Image processing method, image processing device and display device
US14/777,851 US20160293138A1 (en) 2014-10-11 2015-04-20 Image processing method, image processing apparatus and display device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201410534853.3 2014-10-11
CN201410534853.3A CN104299196A (zh) 2014-10-11 2014-10-11 一种图像处理装置及方法、显示设备

Publications (1)

Publication Number Publication Date
WO2016054904A1 true WO2016054904A1 (zh) 2016-04-14

Family

ID=52318917

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/076938 WO2016054904A1 (zh) 2014-10-11 2015-04-20 图像处理方法、图像处理装置及显示设备

Country Status (4)

Country Link
US (1) US20160293138A1 (zh)
EP (1) EP3206185B1 (zh)
CN (1) CN104299196A (zh)
WO (1) WO2016054904A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108701351A (zh) * 2016-10-17 2018-10-23 华为技术有限公司 一种图像显示增强方法及装置
CN111223058A (zh) * 2019-12-27 2020-06-02 杭州雄迈集成电路技术股份有限公司 一种图像增强方法
CN113409417A (zh) * 2021-07-15 2021-09-17 南京信息工程大学 一种基于小波变换的莫尔条纹信息提取方法

Families Citing this family (14)

Publication number Priority date Publication date Assignee Title
CN104299196A (zh) * 2014-10-11 2015-01-21 京东方科技集团股份有限公司 一种图像处理装置及方法、显示设备
CN104935902B (zh) * 2015-06-02 2017-03-15 三星电子(中国)研发中心 图像色彩增强方法、装置及电子设备
KR102329821B1 (ko) 2015-06-04 2021-11-23 삼성전자주식회사 개인 인증 전자 장치 및 방법
CN105184748A (zh) * 2015-09-17 2015-12-23 电子科技大学 图像比特深度增强方法
CN106780447B (zh) * 2016-12-02 2019-07-16 北京航星机器制造有限公司 一种智能选择图像增强方法
CN109840526A (zh) * 2017-11-27 2019-06-04 中国移动国际有限公司 一种基于对象的社交方法及装置
CN108462876B (zh) * 2018-01-19 2021-01-26 瑞芯微电子股份有限公司 一种视频解码优化调整装置及方法
CN108805838B (zh) * 2018-06-05 2021-03-02 Oppo广东移动通信有限公司 一种图像处理方法、移动终端及计算机可读存储介质
CN108776959B (zh) * 2018-07-10 2021-08-06 Oppo(重庆)智能科技有限公司 图像处理方法、装置及终端设备
CN112087648B (zh) * 2019-06-14 2022-02-25 腾讯科技(深圳)有限公司 图像处理方法、装置、电子设备及存储介质
CN111031346B (zh) * 2019-10-28 2021-11-02 网宿科技股份有限公司 一种增强视频画质的方法和装置
CN111507911A (zh) * 2020-04-02 2020-08-07 广东九联科技股份有限公司 一种基于深度学习的图像质量处理方法
CN112599076A (zh) * 2020-12-04 2021-04-02 浪潮电子信息产业股份有限公司 一种显示器显示方法及相关装置
CN112907457A (zh) * 2021-01-19 2021-06-04 Tcl华星光电技术有限公司 图像处理方法、图像处理装置及计算机设备

Citations (4)

Publication number Priority date Publication date Assignee Title
US20040212725A1 (en) * 2003-03-19 2004-10-28 Ramesh Raskar Stylized rendering using a multi-flash camera
CN102473291A (zh) * 2009-07-20 2012-05-23 汤姆森特许公司 体育视频中的远视场景的检测和自适应视频处理方法
CN104202604A (zh) * 2014-08-14 2014-12-10 腾讯科技(深圳)有限公司 视频增强的方法和装置
CN104299196A (zh) * 2014-10-11 2015-01-21 京东方科技集团股份有限公司 一种图像处理装置及方法、显示设备

Family Cites Families (11)

Publication number Priority date Publication date Assignee Title
US7359572B2 (en) * 2003-03-26 2008-04-15 Microsoft Corporation Automatic analysis and adjustment of digital images with exposure problems
US7440593B1 (en) * 2003-06-26 2008-10-21 Fotonation Vision Limited Method of improving orientation and color balance of digital images using face detection information
JP3925476B2 (ja) * 2003-08-08 2007-06-06 セイコーエプソン株式会社 撮影場面の判定および撮影場面に応じた画像処理
US7426312B2 (en) * 2005-07-05 2008-09-16 Xerox Corporation Contrast enhancement of images
TW200812401A (en) * 2006-08-23 2008-03-01 Marketech Int Corp Image adjusting device
JP2008282267A (ja) * 2007-05-11 2008-11-20 Seiko Epson Corp シーン識別装置、及び、シーン識別方法
US7933454B2 (en) * 2007-06-25 2011-04-26 Xerox Corporation Class-based image enhancement system
JP4799511B2 (ja) * 2007-08-30 2011-10-26 富士フイルム株式会社 撮像装置および方法並びにプログラム
US8285059B2 (en) * 2008-05-20 2012-10-09 Xerox Corporation Method for automatic enhancement of images containing snow
JP4772839B2 (ja) * 2008-08-13 2011-09-14 株式会社エヌ・ティ・ティ・ドコモ 画像識別方法および撮像装置
CN102222328B (zh) * 2011-07-01 2012-10-03 杭州电子科技大学 一种边缘保持的自然场景图像自适应加权滤波方法


Non-Patent Citations (1)

Title
See also references of EP3206185A4 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
CN108701351A (zh) * 2016-10-17 2018-10-23 华为技术有限公司 一种图像显示增强方法及装置
CN108701351B (zh) * 2016-10-17 2022-03-29 华为技术有限公司 一种图像显示增强方法及装置
CN111223058A (zh) * 2019-12-27 2020-06-02 杭州雄迈集成电路技术股份有限公司 一种图像增强方法
CN111223058B (zh) * 2019-12-27 2023-07-18 杭州雄迈集成电路技术股份有限公司 一种图像增强方法
CN113409417A (zh) * 2021-07-15 2021-09-17 南京信息工程大学 一种基于小波变换的莫尔条纹信息提取方法
CN113409417B (zh) * 2021-07-15 2023-05-30 南京信息工程大学 一种基于小波变换的莫尔条纹信息提取方法

Also Published As

Publication number Publication date
EP3206185A4 (en) 2018-03-14
CN104299196A (zh) 2015-01-21
EP3206185B1 (en) 2021-03-10
EP3206185A1 (en) 2017-08-16
US20160293138A1 (en) 2016-10-06

Similar Documents

Publication Publication Date Title
WO2016054904A1 (zh) 图像处理方法、图像处理装置及显示设备
CN108596849B (zh) 一种基于天空区域分割的单幅图像去雾方法
US11127122B2 (en) Image enhancement method and system
Zhu et al. Single image dehazing using color attenuation prior.
CN108090888B (zh) 基于视觉注意模型的红外图像和可见光图像的融合检测方法
CN106296600B (zh) 一种基于小波变换图像分解的对比度增强方法
CN108460757A (zh) 一种手机TFT-LCD屏Mura缺陷在线自动检测方法
WO2016206087A1 (zh) 一种低照度图像处理方法和装置
WO2018023916A1 (zh) 一种彩色图像去阴影方法和应用
CN102903081A (zh) 基于rgb彩色模型的低光照图像增强方法
CN108288258A (zh) 一种针对恶劣天气条件下的低质图像增强方法
CN105678245A (zh) 一种基于哈尔特征的靶位识别方法
CN111079688A (zh) 一种人脸识别中的基于红外图像的活体检测的方法
CN109785321A (zh) 基于深度学习和Gabor滤波器的睑板腺区域提取方法
CN105574826B (zh) 遥感影像的薄云去除方法
WO2020130799A1 (en) A system and method for licence plate detection
CN107067386B (zh) 一种基于相对全局直方图拉伸的浅海水下图像增强方法
CN107256539B (zh) 一种基于局部对比度的图像锐化方法
Liu et al. Single image haze removal via depth-based contrast stretching transform
CN112750089B (zh) 基于局部块最大和最小像素先验的光学遥感影像去雾方法
Ghani et al. Integration of enhanced background filtering and wavelet fusion for high visibility and detection rate of deep sea underwater image of underwater vehicle
CN105405110A (zh) 非均匀光照补偿方法
CN111611940A (zh) 一种基于大数据处理的快速视频人脸识别方法
CN106993186B (zh) 一种立体图像显著性检测方法
CN114463814A (zh) 一种基于图像处理的快速证件照眼镜检测方法

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 14777851

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2015762465

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2015762465

Country of ref document: EP

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 15762465

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE