WO2020082686A1 - Image processing method and apparatus, and computer-readable storage medium

Image processing method and apparatus, and computer-readable storage medium

Info

Publication number: WO2020082686A1
Application number: PCT/CN2019/079510
Authority: WO (WIPO (PCT))
Prior art keywords: value, pixel, saliency, image, processed
Other languages: English (en), French (fr)
Inventors: 马瑞翔, 陈洪波, 全晓荣, 付谨学
Original assignee: 深圳创维-RGB电子有限公司
Application filed by 深圳创维-RGB电子有限公司
Priority to EP19875761.9A (patent EP3751510A4/en)
Priority to US16/976,687 (patent US20210049786A1/en)
Publication of WO2020082686A1 (patent WO2020082686A1/zh)

Classifications

    • G06T5/94
    • G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis > G06T7/10 Segmentation; Edge detection > G06T7/11 Region-based segmentation
    • G06T7/00 Image analysis > G06T7/10 Segmentation; Edge detection > G06T7/194 Segmentation involving foreground-background segmentation
    • G06T7/00 Image analysis > G06T7/90 Determination of colour characteristics
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/10 Image acquisition modality > G06T2207/10024 Color image
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20021 Dividing image into blocks, subimages or windows
    • G06T2207/00 Indexing scheme for image analysis or image enhancement > G06T2207/20 Special algorithmic details > G06T2207/20212 Image combination > G06T2207/20221 Image fusion; Image merging

Definitions

  • The present application relates to the field of image processing, and in particular to an image processing method, an image processing device, and a computer-readable storage medium.
  • Image saliency arises from the uniqueness, unpredictability, scarcity, and singularity of human vision, and is driven by image features such as color, gradient, and edges. Experiments show that the brain responds more readily to stimuli from high-contrast regions of an image. Effectively extracting such features from an image remains a difficult problem.
  • Existing salient-region detection methods fall mainly into two categories: saliency calculation based on local contrast and saliency calculation based on global contrast.
  • An image processed with the HC algorithm highlights the interior of a salient target well, but some target edges are not brought out; an image processed with the RC algorithm highlights the edges of a salient target well, but the interior of the target is not uniform enough.
  • The main purpose of the present application is to provide an image processing method, an image processing device, and a computer-readable storage medium, aiming to solve the technical problem that existing image enhancement algorithms cannot highlight both the interior of a salient target and its edges.
  • To achieve the above objective, the present application provides an image processing method. The method includes the following steps: calculating, based on the color data of an image to be processed, a first saliency value corresponding to each pixel in the image to be processed using the HC algorithm; calculating a second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm; calculating, based on the first saliency value and the second saliency value, a target saliency value corresponding to each pixel in the image to be processed; and determining a saliency map corresponding to the image to be processed based on the target saliency values.
  • To achieve the above objective, the present application also provides an image processing device. The image processing device includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the computer-readable instructions are executed by the processor, the steps of the foregoing image processing method are implemented.
  • To achieve the above objective, the present application further provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by a processor, the steps of the foregoing image processing method are implemented.
  • In the present application, the first saliency value corresponding to each pixel in the image to be processed is calculated with the HC algorithm based on the color data of the image, the second saliency value corresponding to each pixel is then calculated based on the RC algorithm, the target saliency value corresponding to each pixel is calculated from the first and second saliency values, and finally the saliency map corresponding to the image to be processed is determined from the target saliency values. The resulting saliency map highlights both the interior of the salient target in the image to be processed and its edges, which better matches the human visual attention mechanism.
  • FIG. 1 is a schematic structural diagram of the image processing apparatus in the hardware operating environment involved in the embodiments of the present application;
  • FIG. 2 is a schematic flowchart of a first embodiment of the image processing method of the present application.
  • The main solution of the embodiments of the present application is: calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm; calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm; calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed; and determining the saliency map corresponding to the image to be processed based on the target saliency values.
  • Because existing image enhancement algorithms cannot highlight both the interior of a salient target and its edges, the present application combines the two algorithms so that the resulting saliency map highlights both the interior of the salient target and its edges, which better matches the human visual attention mechanism.
  • FIG. 1 is a schematic structural diagram of an image processing apparatus in a hardware operating environment according to an embodiment of the present application.
  • The image processing device in the embodiments of the present application may be a PC, or a mobile terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
  • As shown in FIG. 1, the image processing apparatus may include a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002.
  • The communication bus 1002 is used to implement connection and communication between these components.
  • The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface.
  • The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface).
  • The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory.
  • The memory 1005 may optionally also be a storage device independent of the foregoing processor 1001.
  • Optionally, the image processing device may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The sensors include, for example, light sensors, motion sensors, and other sensors. Of course, the image processing device may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which will not be described in detail here.
  • Those skilled in the art will understand that the structure of the image processing apparatus shown in FIG. 1 does not constitute a limitation on the image processing apparatus; it may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components.
  • the memory 1005 as a computer storage medium may include an operating system, a network communication module, a user interface module, and a network operation control application program.
  • the network interface 1004 is mainly used to connect to a background server and perform data communication with the background server;
  • the user interface 1003 is mainly used to connect to a client (user side) and perform data communication with the client;
  • the processor 1001 may be used to call computer-readable instructions stored in the memory 1005 and perform the following operations:
  • Calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm; calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm; calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed; and determining the saliency map corresponding to the image to be processed based on the target saliency values.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels; and determining the first saliency value corresponding to the first pixel based on the first color distance.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: determining whether second pixels of the same color exist in the image to be processed; and, when no second pixels of the same color exist in the image to be processed, sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: when second pixels of the same color exist in the image to be processed, obtaining, based on the Lab color model, the second color distance between a target pixel among the second pixels and the third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels; determining the saliency value of the target pixel based on the second color distance, and taking the saliency value of the target pixel as the saliency value of the second pixels; sequentially traversing the third pixels and, based on the Lab color model, obtaining the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel; determining the saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels; and determining the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: segmenting the image to be processed using the SLIC algorithm to obtain a plurality of sub-regions, each of which includes one pixel; and calculating the saliency value corresponding to each sub-region based on the RC algorithm, and determining the second saliency value based on the saliency values corresponding to the sub-regions.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: sequentially traversing the sub-regions to obtain the spatial distance between the currently traversed first sub-region and the second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region; and determining the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determining the second saliency value based on the saliency value corresponding to the first sub-region.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: obtaining the spatial weight corresponding to the first sub-region; and determining the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
  • Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations: obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight, where the sum of the first weight and the second weight is 1 and the first weight ranges from 0.35 to 0.45.
  • Referring to FIG. 2, the present application also provides an image processing method. The image processing method includes the following steps:
  • Step S100: based on the color data of the image to be processed, calculate the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm.
  • In this embodiment, the color data of the image to be processed comprises the Lab data of each pixel of the image under the Lab color model. Based on the Lab data, the color distances between the pixels of the image to be processed in L*a*b* space are determined, and the HC algorithm then uses these color distances to calculate the first saliency value corresponding to each pixel in the image to be processed.
  • Specifically, the saliency value of a pixel is calculated by the following formula:
  • S(I_k) = Σ_{∀ I_i ∈ I} D(I_k, I_i)
  • where I_k denotes the pixel being evaluated, S(I_k) is the first saliency value of pixel I_k, I_i denotes the other pixels, and D(I_k, I_i) is the color distance between pixels I_k and I_i in L*a*b* space.
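  • As a concrete illustration of this per-pixel formulation (a sketch for explanation only, not code from the patent), the following Python snippet assumes an RGB input image and uses scikit-image's rgb2lab for the L*a*b* conversion; the double loop is a direct, unoptimized transcription of the sum above.

      import numpy as np
      from skimage.color import rgb2lab

      def hc_saliency_naive(image_rgb):
          """Per-pixel HC saliency: S(I_k) = sum over all pixels I_i of D(I_k, I_i) in L*a*b* space."""
          lab = rgb2lab(image_rgb).reshape(-1, 3)            # one Lab vector per pixel
          n = lab.shape[0]
          saliency = np.zeros(n)
          for k in range(n):                                  # O(N^2): small images only
              # Euclidean color distance from pixel k to every other pixel, summed
              saliency[k] = np.linalg.norm(lab - lab[k], axis=1).sum()
          return saliency.reshape(image_rgb.shape[:2])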
  • Step S200: calculate the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm.
  • In this embodiment, the SLIC algorithm is first used to divide the image to be processed into a number of sub-regions, each of which includes one pixel. Based on the RC algorithm, the spatial distances between the sub-regions are determined, and the second saliency value corresponding to each sub-region is calculated from these spatial distances.
  • For example, for a region r_k, the second saliency value is calculated as S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i), where w(r_i) is the weight of region r_i, i.e. the number of pixels in region r_i, and D_r(r_k, r_i) is the spatial distance between region r_k and region r_i.
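  • The region-level sum can be sketched in the same spirit; the snippet below follows the definitions given here (w(r_i) as the pixel count of a region, D_r as the spatial distance between regions, taken here as the Euclidean distance between region centers) and assumes the region centers and sizes have already been extracted from a segmentation.

      import numpy as np

      def rc_saliency(region_centers, region_sizes):
          """S(r_k) = sum over r_i != r_k of w(r_i) * D_r(r_k, r_i)."""
          centers = np.asarray(region_centers, dtype=float)     # R x 2 array of (row, col) centers
          sizes = np.asarray(region_sizes, dtype=float)         # w(r_i): pixels per region
          saliency = np.zeros(len(centers))
          for k in range(len(centers)):
              d = np.linalg.norm(centers - centers[k], axis=1)  # spatial distances D_r(r_k, r_i)
              saliency[k] = np.sum(sizes * d)                   # the k-th term is zero since d[k] = 0
          return saliency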
  • Step S300: based on the first saliency value and the second saliency value, calculate the target saliency value corresponding to each pixel in the image to be processed.
  • In this embodiment, the first saliency value obtained with the HC algorithm is S(I_k) and the second saliency value obtained with the RC algorithm is S(r_k). Once S(I_k) and S(r_k) are available, the target saliency value corresponding to each pixel in the image to be processed is calculated from them.
  • For example, the first and second saliency values are assigned weights that sum to 1 and are added to obtain the target saliency value S, calculated as S = βS(I_k) + (1-β)S(r_k), where β is a proportional control factor ranging from 0.35 to 0.45.
  • This formula is proposed on the basis of the above algorithms; the scope of protection is not limited to this particular formula, and any calculation method based on the above algorithms falls within the scope of protection.
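  • The weighted combination itself is a one-liner; the sketch below assumes both maps have already been brought to a common scale (e.g. normalized to [0, 1]).

      import numpy as np

      def fuse_saliency(s_hc, s_rc, beta=0.4):
          """Target saliency S = beta * S(I_k) + (1 - beta) * S(r_k), with beta in [0.35, 0.45]."""
          if not 0.35 <= beta <= 0.45:
              raise ValueError("beta is expected to lie in the range 0.35 to 0.45")
          return beta * np.asarray(s_hc, dtype=float) + (1.0 - beta) * np.asarray(s_rc, dtype=float)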
  • Step S400: determine the saliency map corresponding to the image to be processed based on the target saliency values. In this embodiment, once the target saliency value is obtained, the saliency map corresponding to the image to be processed is determined from it.
  • Because the saliency map produced by the HC algorithm gives good results at high resolution without excessive loss of detail, the algorithm detects the salient target well when the color difference between the salient target and the background is large; the saliency map obtained with the RC algorithm, because it takes spatial relationships into account, can clearly highlight the target and dim the background. Therefore, the saliency map of this embodiment highlights both the interior of the salient target in the image to be processed and its edges, achieving the effect of highlighting both simultaneously.
  • In summary, the image processing method proposed in this embodiment calculates the first saliency value corresponding to each pixel in the image to be processed with the HC algorithm based on the color data of the image, then calculates the second saliency value corresponding to each pixel based on the RC algorithm, then calculates the target saliency value corresponding to each pixel from the first and second saliency values, and finally determines the saliency map corresponding to the image to be processed from the target saliency values, so that the saliency map highlights both the interior of the image's salient target and its edges and better matches the human visual attention mechanism.
  • Based on the first embodiment, a second embodiment of the image processing method of the present application is proposed. In this embodiment, step S100 includes:
  • S110: sequentially traverse the pixels of the image to be processed and, based on the Lab color model, obtain the first color distance between the currently traversed first pixel and the other pixels;
  • S120: determine the first saliency value corresponding to the first pixel based on the first color distance.
  • In this embodiment, each pixel in the image to be processed is traversed in turn and the currently traversed pixel is set as the first pixel; the first color distances between the first pixel and the other pixels are obtained, and the first saliency value corresponding to the first pixel is determined from the obtained first color distances, until all pixels in the image to be processed have been traversed.
  • Specifically, the sum of the first color distances is the saliency value of the first pixel, calculated by the following formula:
  • S(I_k) = Σ_{∀ I_i ∈ I} D(I_k, I_i)
  • where I_k denotes the pixel being evaluated, S(I_k) is the first saliency value of pixel I_k, I_i denotes the other pixels, and D(I_k, I_i) is the color distance between pixels I_k and I_i in L*a*b* space.
  • With the image processing method proposed in this embodiment, the pixels of the image to be processed are traversed in turn, the first color distance between the currently traversed first pixel and the other pixels is obtained based on the Lab color model, and the first saliency value corresponding to the first pixel is then determined from the first color distance. The first saliency value can thus be determined accurately from the first color distance, which improves the accuracy of the target saliency value and allows the subsequently obtained saliency map to highlight both the interior and the edges of the image's salient target.
  • Based on the second embodiment, a third embodiment of the image processing method of the present application is proposed. In this embodiment, step S110 includes:
  • S111: determine whether second pixels of the same color exist in the image to be processed;
  • S112: when no second pixels of the same color exist in the image to be processed, sequentially traverse the pixels of the image to be processed and, based on the Lab color model, obtain the first color distance between the currently traversed first pixel and the other pixels.
  • In this embodiment, it is first determined whether second pixels of the same color exist in the image to be processed. For example, the RGB data of each pixel is calculated from the Lab data of each pixel of the image under the Lab color model, and whether second pixels of the same color exist is determined from the RGB data of the pixels of the image to be processed. If no such pixels exist, step S110 is executed.
  • With the image processing method proposed in this embodiment, it is determined whether second pixels of the same color exist in the image to be processed; when no second pixels of the same color exist, the pixels of the image are traversed in turn and the first color distance between the currently traversed first pixel and the other pixels is obtained based on the Lab color model, so that when the image to be processed contains no pixels of the same color the first saliency value can be determined accurately from the first color distance.
  • Based on the third embodiment, a fourth embodiment of the image processing method of the present application is proposed. In this embodiment, after step S111 the method further includes:
  • S113: when second pixels of the same color exist in the image to be processed, obtain, based on the Lab color model, the second color distance between a target pixel among the second pixels and the third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels;
  • S114: determine the saliency value of the target pixel based on the second color distance, and take the saliency value of the target pixel as the saliency value of the second pixels;
  • S115: sequentially traverse the third pixels and, based on the Lab color model, obtain the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel;
  • S116: determine the saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels;
  • S117: determine the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
  • In this embodiment, if second pixels of the same color exist in the image to be processed, any one of the second pixels is taken as the target pixel, the second color distance between the target pixel and the third pixels is obtained based on the Lab color model, and the saliency value of the target pixel is determined from the second color distance. In the Lab color model the positions of pixels of the same color coincide, so the saliency value of the target pixel is also the saliency value of the second pixels.
  • The saliency value of a third pixel is calculated by the following formula:
  • S(I_k) = S(c_l) = Σ_{j=1}^{n} f_j D(c_l, c_j)
  • where c_l is the color value of pixel I_k, n is the number of distinct pixel colors, and f_j is the number of pixels in image I whose color is c_j.
  • With the image processing method proposed in this embodiment, when second pixels of the same color exist in the image to be processed, the second color distance between the target pixel among the second pixels and the third pixels is obtained based on the Lab color model; the saliency value of the target pixel is then determined from the second color distance and taken as the saliency value of the second pixels; the third pixels are then traversed in turn and, based on the Lab color model, the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels are obtained; the saliency value of the third pixels is then determined from the third color distance, the fourth color distance, and the number of pixels; and finally the first saliency value is determined from the saliency value of the second pixels and the saliency value of the third pixels. When pixels of the same color exist in the image to be processed, this simplifies the calculation of the first saliency value and improves the efficiency of that calculation.
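  • The per-color formulation above lends itself to a histogram-style computation in which every pixel of a given (quantized) color shares one saliency value; the sketch below illustrates the idea. The quantization step and bin count are assumptions for the example, not values taken from the patent.

      import numpy as np
      from skimage.color import rgb2lab

      def hc_saliency_histogram(image_rgb, bins=12):
          """S(c_l) = sum over j of f_j * D(c_l, c_j): one distance term per distinct color."""
          lab = rgb2lab(image_rgb).reshape(-1, 3)
          # Quantize Lab coordinates so that (nearly) identical colors fall into one bin.
          quantized = np.floor(lab / (100.0 / bins)).astype(np.int32)
          colors, inverse, counts = np.unique(quantized, axis=0,
                                              return_inverse=True, return_counts=True)
          inverse = inverse.ravel()
          # Representative Lab value c_j of each bin: the mean of its member pixels.
          centers = np.stack([lab[inverse == j].mean(axis=0) for j in range(len(colors))])
          # f_j * D(c_l, c_j), summed over all distinct colors c_j.
          dist = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
          color_saliency = (dist * counts[None, :]).sum(axis=1)
          return color_saliency[inverse].reshape(image_rgb.shape[:2])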
  • Based on the first embodiment, a fifth embodiment of the image processing method of the present application is proposed. In this embodiment, step S200 includes:
  • S210: segment the image to be processed using the SLIC algorithm to obtain a plurality of sub-regions, each of which includes one pixel;
  • S220: calculate the saliency value corresponding to each sub-region based on the RC algorithm, and determine the second saliency value based on the saliency values corresponding to the sub-regions.
  • In this embodiment, the image is first segmented using the SLIC algorithm to obtain multiple sub-regions, each of which contains only one pixel.
  • For a region r_k, the second saliency value is calculated as S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i), where w(r_i) is the weight of region r_i, i.e. the number of pixels in region r_i, and D_r(r_k, r_i) is the spatial distance between region r_k and region r_i.
  • With the image processing method proposed in this embodiment, the image to be processed is segmented with the SLIC algorithm to obtain multiple sub-regions, the saliency value corresponding to each sub-region is then calculated based on the RC algorithm, and the second saliency value is determined from the saliency values corresponding to the sub-regions. The second saliency value can thus be determined accurately from the sub-regions, which improves both the accuracy and the efficiency of the second saliency value.
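  • For the segmentation step, scikit-image's slic is one readily available implementation; the sketch below collects, for each sub-region, the statistics the later steps need. Using ordinary superpixels (many pixels per region) is an assumption made for the example; the single-pixel-per-region reading in the text would correspond to setting n_segments to the pixel count.

      import numpy as np
      from skimage.segmentation import slic
      from skimage.color import rgb2lab

      def segment_regions(image_rgb, n_segments=400):
          """Split the image into sub-regions with SLIC and gather per-region statistics."""
          labels = slic(image_rgb, n_segments=n_segments, compactness=10, start_label=0)
          lab = rgb2lab(image_rgb)
          regions = []
          for r in range(labels.max() + 1):
              mask = labels == r
              ys, xs = np.nonzero(mask)
              regions.append({
                  "size": int(mask.sum()),                  # w(r_i): number of pixels in r_i
                  "center": (ys.mean(), xs.mean()),         # spatial position of r_i
                  "mean_lab": lab[mask].mean(axis=0),       # average Lab color of r_i
              })
          return labels, regions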
  • Based on the fifth embodiment, a sixth embodiment of the image processing method of the present application is proposed. In this embodiment, step S220 includes:
  • S221: sequentially traverse the sub-regions to obtain the spatial distance between the currently traversed first sub-region and the second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region;
  • S222: determine the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determine the second saliency value based on the saliency value corresponding to the first sub-region.
  • In this embodiment, each sub-region in the image to be processed is traversed in turn and the currently traversed sub-region is set as the first sub-region; the spatial distances between the first sub-region and the other sub-regions are obtained, and the saliency value corresponding to the first sub-region is determined from the obtained spatial distances, until all sub-regions in the image to be processed have been traversed.
  • Specifically, the sum of these spatial distances gives the saliency value of the first sub-region, calculated by the formula S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i) given above.
  • With the image processing method proposed in this embodiment, the sub-regions of the image to be processed are traversed in turn, the spatial distance between the currently traversed first sub-region and the other sub-regions is obtained, and the saliency value corresponding to the first sub-region is then determined from the spatial distance. The saliency value of the first sub-region can thus be determined accurately from the spatial distance, which improves the accuracy of the target saliency value and allows the subsequently obtained saliency map to highlight the edges of the image's salient target.
  • Based on the sixth embodiment, a seventh embodiment of the image processing method of the present application is proposed. In this embodiment, step S222 includes:
  • S2221: obtain the spatial weight corresponding to the first sub-region;
  • S2222: determine the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
  • In this embodiment, the spatial weight corresponding to the first sub-region is obtained, and the saliency value corresponding to the first sub-region is determined from the spatial distances between the first sub-region and the other sub-regions together with that spatial weight.
  • Specifically, for a sub-region r_k, its saliency value is calculated from the spatial distance D_r(r_k, r_i) between sub-region r_k and the other sub-regions r_i, the spatial weight σ_s corresponding to sub-region r_k, and the number of pixels w(r_i) in each region r_i.
  • With the image processing method proposed in this embodiment, the spatial weight corresponding to the first sub-region is obtained, and the saliency value corresponding to the first sub-region is then determined from the spatial distances between the first sub-region and the other sub-regions and from that spatial weight. Weighting by the spatial weight of the sub-region increases the influence of nearby regions and reduces the influence of distant regions, so that the subsequently obtained saliency map can highlight the edges of the image's salient target.
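  • The exact weighting formula is not reproduced in this text, so the sketch below is an assumption about its form rather than the patent's own equation: following the region-contrast literature that the patent cites, it applies an exponential falloff in the spatial distance, controlled by the spatial weight σ_s, so that nearby regions contribute more. The region statistics are those produced by the segment_regions sketch above.

      import numpy as np

      def rc_saliency_spatial(regions, sigma_s=0.4):
          """Spatially weighted region contrast (assumed form):
          S(r_k) = sum_{i != k} exp(-D_s(r_k, r_i) / sigma_s^2) * w(r_i) * D_color(r_k, r_i),
          with D_s the normalized distance between region centers, w(r_i) the pixel count,
          and D_color the Lab distance between region mean colors.
          """
          centers = np.array([r["center"] for r in regions], dtype=float)
          centers = centers / centers.max(axis=0)           # normalize coordinates to [0, 1]
          sizes = np.array([r["size"] for r in regions], dtype=float)
          colors = np.array([r["mean_lab"] for r in regions], dtype=float)

          saliency = np.zeros(len(regions))
          for k in range(len(regions)):
              d_s = np.linalg.norm(centers - centers[k], axis=1)
              d_color = np.linalg.norm(colors - colors[k], axis=1)
              weight = np.exp(-d_s / sigma_s ** 2)          # nearby regions weigh more
              weight[k] = 0.0                               # exclude r_k itself
              saliency[k] = np.sum(weight * sizes * d_color)
          return saliency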
  • Based on the first embodiment, an eighth embodiment of the image processing method of the present application is proposed. In this embodiment, step S300 includes:
  • S310: obtain a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value;
  • S320: calculate the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
  • where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  • In this embodiment, the first weight corresponding to the first saliency value and the second weight corresponding to the second saliency value are first obtained, and the target saliency value is then calculated from the first saliency value, the first weight, the second saliency value, and the second weight, where the sum of the first weight and the second weight is 1 and the first weight ranges from 0.35 to 0.45.
  • Specifically, the target saliency value is calculated by the following formula:
  • S = βS(I_k) + (1-β)S(r_k)
  • where S is the target saliency value, β is the first weight, S(I_k) is the first saliency value, S(r_k) is the second saliency value, and (1-β) is the second weight.
  • With the image processing method proposed in this embodiment, the first weight corresponding to the first saliency value and the second weight corresponding to the second saliency value are obtained, and the target saliency value is then calculated from the first saliency value, the first weight, the second saliency value, and the second weight. Processed in this way, the resulting saliency map highlights both the interior of the image's salient target and its edges.
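  • Chaining the sketches above into a complete pass over one image (the file name and the normalization step are assumptions for the example) might look as follows:

      import numpy as np
      import imageio.v3 as iio

      def normalize(x):
          """Scale an array to [0, 1] so the two saliency maps are comparable."""
          x = np.asarray(x, dtype=float)
          return (x - x.min()) / (x.max() - x.min() + 1e-12)

      image = iio.imread("input.jpg")                       # hypothetical input path

      s_hc = hc_saliency_histogram(image)                   # first saliency value (HC, per pixel)
      labels, regions = segment_regions(image)              # SLIC sub-regions
      s_rc = rc_saliency_spatial(regions)[labels]           # second saliency value mapped to pixels

      saliency_map = fuse_saliency(normalize(s_hc), normalize(s_rc), beta=0.4)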
  • In addition, an embodiment of the present application further provides a computer-readable storage medium on which computer-readable instructions are stored. When executed by a processor, the computer-readable instructions implement the following operations: calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm; calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm; calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed; and determining the saliency map corresponding to the image to be processed based on the target saliency values.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels; and determining the first saliency value corresponding to the first pixel based on the first color distance.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: determining whether second pixels of the same color exist in the image to be processed; and, when no second pixels of the same color exist in the image to be processed, sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: when second pixels of the same color exist in the image to be processed, obtaining, based on the Lab color model, the second color distance between the target pixel among the second pixels and the third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels; determining the saliency value of the target pixel based on the second color distance, and taking the saliency value of the target pixel as the saliency value of the second pixels; sequentially traversing the third pixels and, based on the Lab color model, obtaining the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel; determining the saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels; and determining the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: segmenting the image to be processed using the SLIC algorithm to obtain a plurality of sub-regions, each of which includes one pixel; and calculating the saliency value corresponding to each sub-region based on the RC algorithm, and determining the second saliency value based on the saliency values corresponding to the sub-regions.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: sequentially traversing the sub-regions to obtain the spatial distance between the currently traversed first sub-region and the second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region; and determining the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determining the second saliency value based on the saliency value corresponding to the first sub-region.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: obtaining the spatial weight corresponding to the first sub-region; and determining the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
  • Further, when executed by the processor, the computer-readable instructions also implement the following operations: obtaining the first weight corresponding to the first saliency value and the second weight corresponding to the second saliency value; and calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight, where the sum of the first weight and the second weight is 1 and the first weight ranges from 0.35 to 0.45.
  • Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation.
  • Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) as described above and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.

Abstract

The present application discloses an image processing method, including the following steps: calculating, based on the color data of an image to be processed, a first saliency value corresponding to each pixel in the image to be processed using the HC algorithm; calculating a second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm; calculating, based on the first saliency value and the second saliency value, a target saliency value corresponding to each pixel in the image to be processed; and determining a saliency map corresponding to the image to be processed based on the target saliency value. The present application also discloses an image processing device and a computer-readable storage medium. With the present application, the saliency map highlights both the interior of the salient target in the image to be processed and its edges, which better matches the human visual attention mechanism.

Description

Image processing method and apparatus, and computer-readable storage medium
TECHNICAL FIELD
The present application relates to the field of image processing, and in particular to an image processing method, an image processing apparatus, and a computer-readable storage medium.
BACKGROUND
Image saliency arises from the uniqueness, unpredictability, scarcity, and singularity of human vision, and is driven by image features such as color, gradient, and edges. Experiments show that the brain responds more readily to stimuli from high-contrast regions of an image. Effectively extracting such features from an image remains a difficult problem. Existing salient-region detection methods fall mainly into two categories: saliency calculation based on local contrast and saliency calculation based on global contrast.
An image processed with the HC algorithm highlights the interior of a salient target well, but some target edges are not brought out; an image processed with the RC algorithm highlights the edges of a salient target well, but the interior of the target is not uniform enough.
The above content is provided only to assist in understanding the technical solution of the present application and does not constitute an admission that it is prior art.
SUMMARY
The main purpose of the present application is to provide an image processing method, an image processing apparatus, and a computer-readable storage medium, aiming to solve the technical problem that existing image enhancement algorithms cannot highlight both the interior of a salient target and its edges.
To achieve the above objective, the present application provides an image processing method (a method for enhancing image saliency), which includes the following steps:
calculating, based on the color data of an image to be processed, a first saliency value corresponding to each pixel in the image to be processed using the HC algorithm;
calculating a second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm;
calculating, based on the first saliency value and the second saliency value, a target saliency value corresponding to each pixel in the image to be processed; and
determining a saliency map corresponding to the image to be processed based on the target saliency value.
In addition, to achieve the above objective, the present application also provides an image processing apparatus, which includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the computer-readable instructions are executed by the processor, the steps of the foregoing image processing method are implemented.
In addition, to achieve the above objective, the present application also provides a computer-readable storage medium on which computer-readable instructions are stored; when the computer-readable instructions are executed by the processor, the steps of the foregoing image processing method are implemented.
In the present application, the first saliency value corresponding to each pixel in the image to be processed is calculated with the HC algorithm based on the color data of the image to be processed, the second saliency value corresponding to each pixel in the image to be processed is then calculated based on the RC algorithm, the target saliency value corresponding to each pixel in the image to be processed is then calculated based on the first saliency value and the second saliency value, and finally the saliency map corresponding to the image to be processed is determined based on the target saliency value, so that the saliency map highlights both the interior of the salient target in the image to be processed and its edges, which better matches the human visual attention mechanism.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a schematic structural diagram of the image processing apparatus in the hardware operating environment involved in the embodiments of the present application;
FIG. 2 is a schematic flowchart of a first embodiment of the image processing method of the present application.
The realization of the objectives, functional features, and advantages of the present application will be further described with reference to the accompanying drawings in conjunction with the embodiments.
DETAILED DESCRIPTION
It should be understood that the specific embodiments described here are only intended to explain the present application and are not intended to limit it.
The main solution of the embodiments of the present application is:
calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm; calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm; calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed; and determining the saliency map corresponding to the image to be processed based on the target saliency value.
Existing image enhancement algorithms cannot highlight both the interior of a salient target and its edges.
The present application provides a solution in which the two algorithms are combined, so that the resulting saliency map highlights both the interior of the salient target and its edges, which better matches the human visual attention mechanism.
As shown in FIG. 1, FIG. 1 is a schematic structural diagram of the image processing apparatus in the hardware operating environment involved in the embodiments of the present application.
The image processing apparatus in the embodiments of the present application may be a PC, or a mobile terminal device with a display function such as a smartphone, a tablet computer, an e-book reader, an MP3 (Moving Picture Experts Group Audio Layer III) player, an MP4 (Moving Picture Experts Group Audio Layer IV) player, or a portable computer.
As shown in FIG. 1, the image processing apparatus may include a processor 1001 such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to implement connection and communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may also include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a stable non-volatile memory, such as a disk memory. The memory 1005 may optionally also be a storage device independent of the foregoing processor 1001.
Optionally, the image processing apparatus may further include a camera, an RF (Radio Frequency) circuit, a sensor, an audio circuit, a WiFi module, and so on. The sensors include, for example, light sensors, motion sensors, and other sensors. Of course, the image processing apparatus may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which will not be described in detail here.
Those skilled in the art will understand that the structure of the image processing apparatus shown in FIG. 1 does not constitute a limitation on the image processing apparatus; it may include more or fewer components than illustrated, combine certain components, or use a different arrangement of components.
As shown in FIG. 1, the memory 1005, as a computer storage medium, may include an operating system, a network communication module, a user interface module, and a network operation control application program.
In the image processing apparatus shown in FIG. 1, the network interface 1004 is mainly used to connect to a background server and perform data communication with the background server; the user interface 1003 is mainly used to connect to a client (user side) and perform data communication with the client; and the processor 1001 may be used to call the computer-readable instructions stored in the memory 1005 and perform the following operations:
calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm;
calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm;
calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed; and
determining the saliency map corresponding to the image to be processed based on the target saliency value.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels; and
determining the first saliency value corresponding to the first pixel based on the first color distance.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
determining whether second pixels of the same color exist in the image to be processed; and
when no second pixels of the same color exist in the image to be processed, sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
when second pixels of the same color exist in the image to be processed, obtaining, based on the Lab color model, the second color distance between a target pixel among the second pixels and the third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels;
determining the saliency value of the target pixel based on the second color distance, and taking the saliency value of the target pixel as the saliency value of the second pixels;
sequentially traversing the third pixels and, based on the Lab color model, obtaining the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel;
determining the saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels; and
determining the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
segmenting the image to be processed using the SLIC algorithm to obtain a plurality of sub-regions, each of which includes one pixel; and
calculating the saliency value corresponding to each sub-region based on the RC algorithm, and determining the second saliency value based on the saliency values corresponding to the sub-regions.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
sequentially traversing the sub-regions to obtain the spatial distance between the currently traversed first sub-region and the second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region; and
determining the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determining the second saliency value based on the saliency value corresponding to the first sub-region.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
obtaining the spatial weight corresponding to the first sub-region; and
determining the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
Further, the processor 1001 may call the computer-readable instructions stored in the memory 1005 and also perform the following operations:
obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
Referring to FIG. 2, the present application also provides an image processing method, which includes the following steps:
Step S100: based on the color data of the image to be processed, calculate the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm.
In this embodiment, the color data of the image to be processed comprises the Lab data of each pixel of the image to be processed under the Lab color model. Based on the Lab data, the color distances between the pixels of the image to be processed in L*a*b* space are determined, and the HC algorithm then uses these color distances to calculate the first saliency value corresponding to each pixel in the image to be processed.
Specifically, the saliency value of a pixel is calculated by the following formula:
S(I_k) = Σ_{∀ I_i ∈ I} D(I_k, I_i)
where I_k denotes the pixel being evaluated, S(I_k) is the first saliency value of pixel I_k, I_i denotes the other pixels, and D(I_k, I_i) is the color distance between pixels I_k and I_i in L*a*b* space.
Step S200: calculate the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm.
In this embodiment, the SLIC algorithm is first used to divide the image to be processed into a number of sub-regions, each of which includes one pixel. Based on the RC algorithm, the spatial distances between the sub-regions are determined, and the second saliency value corresponding to each sub-region is calculated from these spatial distances.
For example, for a region r_k, the second saliency value is calculated by the following formula:
S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i)
where w(r_i) is the weight of region r_i, i.e. the number of pixels in region r_i, and D_r(r_k, r_i) is the spatial distance between region r_k and region r_i.
Step S300: based on the first saliency value and the second saliency value, calculate the target saliency value corresponding to each pixel in the image to be processed.
In this embodiment, the first saliency value obtained with the HC algorithm is S(I_k), and the second saliency value obtained with the RC algorithm is S(r_k). Once S(I_k) and S(r_k) are available, the target saliency value corresponding to each pixel in the image to be processed is calculated based on S(I_k) and S(r_k).
For example, the first and second saliency values are assigned weights that sum to 1 and are added to obtain the target saliency value S, calculated as follows:
S = βS(I_k) + (1-β)S(r_k)
where β is a proportional control factor ranging from 0.35 to 0.45.
Further, in this embodiment, the two saliency values are assigned weights that sum to 1 and are added to obtain the final saliency value. This formula is proposed on the basis of the above algorithms; the scope of protection is not limited to this formula, and any calculation method based on the above algorithms falls within the scope of protection.
Step S400: determine the saliency map corresponding to the image to be processed based on the target saliency value.
In this embodiment, once the target saliency value is obtained, the saliency map corresponding to the image to be processed is determined from it.
Because the saliency map produced by the HC algorithm gives good results at high resolution without excessive loss of detail, the algorithm detects the salient target well when the color difference between the salient target and the background is large; the saliency map obtained with the RC algorithm, because it takes spatial relationships into account, can clearly highlight the target and dim the background. Therefore, the saliency map of this embodiment highlights both the interior of the salient target in the image to be processed and its edges, achieving the effect of highlighting both simultaneously.
With the image processing method proposed in this embodiment, the first saliency value corresponding to each pixel in the image to be processed is calculated with the HC algorithm based on the color data of the image to be processed, the second saliency value corresponding to each pixel is then calculated based on the RC algorithm, the target saliency value corresponding to each pixel is then calculated based on the first saliency value and the second saliency value, and finally the saliency map corresponding to the image to be processed is determined based on the target saliency value, so that the saliency map highlights both the interior of the salient target in the image to be processed and its edges, which better matches the human visual attention mechanism.
Based on the first embodiment, a second embodiment of the image processing method of the present application is proposed. In this embodiment, step S100 includes:
S110: sequentially traverse the pixels of the image to be processed and, based on the Lab color model, obtain the first color distance between the currently traversed first pixel and the other pixels;
S120: determine the first saliency value corresponding to the first pixel based on the first color distance.
In this embodiment, each pixel in the image to be processed is traversed in turn and the currently traversed pixel is set as the first pixel; based on the Lab color model, the first color distances between the first pixel and the other pixels are obtained, and the first saliency value corresponding to the first pixel is determined from the obtained first color distances, until all pixels in the image to be processed have been traversed.
Specifically, the sum of the first color distances is the saliency value of the first pixel, calculated by the following formula:
S(I_k) = Σ_{∀ I_i ∈ I} D(I_k, I_i)
where I_k denotes the pixel being evaluated, S(I_k) is the first saliency value of pixel I_k, I_i denotes the other pixels, and D(I_k, I_i) is the color distance between pixels I_k and I_i in L*a*b* space.
With the image processing method proposed in this embodiment, the pixels of the image to be processed are traversed in turn, the first color distance between the currently traversed first pixel and the other pixels is obtained based on the Lab color model, and the first saliency value corresponding to the first pixel is then determined from the first color distance. The first saliency value can thus be determined accurately from the first color distance, which improves the accuracy of the target saliency value and allows the subsequently obtained saliency map to highlight both the interior and the edges of the salient target in the image to be processed.
Based on the second embodiment, a third embodiment of the image processing method of the present application is proposed. In this embodiment, step S110 includes:
S111: determine whether second pixels of the same color exist in the image to be processed;
S112: when no second pixels of the same color exist in the image to be processed, sequentially traverse the pixels of the image to be processed and, based on the Lab color model, obtain the first color distance between the currently traversed first pixel and the other pixels.
In this embodiment, it is first determined whether second pixels of the same color exist in the image to be processed. For example, the RGB data of each pixel is calculated from the Lab data of each pixel of the image to be processed under the Lab color model, and whether second pixels of the same color exist is determined from the RGB data of the pixels of the image to be processed. If no such pixels exist, step S110 is executed.
With the image processing method proposed in this embodiment, it is determined whether second pixels of the same color exist in the image to be processed; when no second pixels of the same color exist, the pixels of the image to be processed are traversed in turn and the first color distance between the currently traversed first pixel and the other pixels is obtained based on the Lab color model, so that when the image to be processed contains no pixels of the same color the first saliency value can be determined accurately from the first color distance.
Based on the third embodiment, a fourth embodiment of the image processing method of the present application is proposed. In this embodiment, after step S111 the method further includes:
S113: when second pixels of the same color exist in the image to be processed, obtain, based on the Lab color model, the second color distance between a target pixel among the second pixels and the third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels;
S114: determine the saliency value of the target pixel based on the second color distance, and take the saliency value of the target pixel as the saliency value of the second pixels;
S115: sequentially traverse the third pixels and, based on the Lab color model, obtain the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel;
S116: determine the saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels;
S117: determine the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
In this embodiment, if second pixels of the same color exist in the image to be processed, any one of the second pixels is taken as the target pixel; based on the Lab color model, the second color distance between the target pixel and the third pixels is obtained, and the saliency value of the target pixel is determined from the second color distance. In the Lab color model the positions of pixels of the same color coincide, so the saliency value of the target pixel is also the saliency value of the second pixels.
The saliency value of a third pixel is calculated by the following formula:
S(I_k) = S(c_l) = Σ_{j=1}^{n} f_j D(c_l, c_j)
where c_l is the color value of pixel I_k, n is the number of distinct pixel colors, and f_j is the number of pixels in image I whose color is c_j.
With the image processing method proposed in this embodiment, when second pixels of the same color exist in the image to be processed, the second color distance between the target pixel among the second pixels and the third pixels is obtained based on the Lab color model; the saliency value of the target pixel is then determined from the second color distance and taken as the saliency value of the second pixels; the third pixels are then traversed in turn and, based on the Lab color model, the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels are obtained; the saliency value of the third pixels is then determined from the third color distance, the fourth color distance, and the number of pixels; and finally the first saliency value is determined from the saliency value of the second pixels and the saliency value of the third pixels. When pixels of the same color exist in the image to be processed, this simplifies the calculation of the first saliency value and improves the efficiency of that calculation.
Based on the first embodiment, a fifth embodiment of the image processing method of the present application is proposed. In this embodiment, step S200 includes:
S210: segment the image to be processed using the SLIC algorithm to obtain a plurality of sub-regions, each of which includes one pixel;
S220: calculate the saliency value corresponding to each sub-region based on the RC algorithm, and determine the second saliency value based on the saliency values corresponding to the sub-regions.
In this embodiment, the image is first segmented using the SLIC algorithm to obtain multiple sub-regions, each of which contains only one pixel.
For a region r_k, the second saliency value is calculated by the following formula:
S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i)
where w(r_i) is the weight of region r_i, i.e. the number of pixels in region r_i, and D_r(r_k, r_i) is the spatial distance between region r_k and region r_i.
With the image processing method proposed in this embodiment, the image to be processed is segmented with the SLIC algorithm to obtain multiple sub-regions, the saliency value corresponding to each sub-region is then calculated based on the RC algorithm, and the second saliency value is determined from the saliency values corresponding to the sub-regions. The second saliency value can thus be determined accurately from the sub-regions, which improves both the accuracy and the efficiency of the second saliency value.
Based on the fifth embodiment, a sixth embodiment of the image processing method of the present application is proposed. In this embodiment, step S220 includes:
S221: sequentially traverse the sub-regions to obtain the spatial distance between the currently traversed first sub-region and the second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region;
S222: determine the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determine the second saliency value based on the saliency value corresponding to the first sub-region.
In this embodiment, each sub-region in the image to be processed is traversed in turn and the currently traversed sub-region is set as the first sub-region; the spatial distances between the first sub-region and the other sub-regions are obtained, and the saliency value corresponding to the first sub-region is determined from the obtained spatial distances, until all sub-regions in the image to be processed have been traversed.
Specifically, the sum of these spatial distances is the saliency value of the first sub-region, calculated by the following formula:
S(r_k) = Σ_{r_i ≠ r_k} w(r_i) D_r(r_k, r_i)
where w(r_i) is the weight of region r_i, i.e. the number of pixels in region r_i, and D_r(r_k, r_i) is the spatial distance between region r_k and region r_i.
With the image processing method proposed in this embodiment, the sub-regions of the image to be processed are traversed in turn, the spatial distance between the currently traversed first sub-region and the other sub-regions is obtained, and the saliency value corresponding to the first sub-region is then determined from the spatial distance. The saliency value of the first sub-region can thus be determined accurately from the spatial distance, which improves the accuracy of the target saliency value and allows the subsequently obtained saliency map to highlight the edges of the salient target in the image to be processed.
Based on the sixth embodiment, a seventh embodiment of the image processing method of the present application is proposed. In this embodiment, step S222 includes:
S2221: obtain the spatial weight corresponding to the first sub-region;
S2222: determine the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
In this embodiment, the spatial weight corresponding to the first sub-region is obtained, and the saliency value corresponding to the first sub-region is determined from the spatial distances between the first sub-region and the other sub-regions.
Specifically, for a sub-region r_k, its saliency value is calculated from the spatial distance D_r(r_k, r_i) between sub-region r_k and the other sub-regions r_i, the spatial weight σ_s corresponding to sub-region r_k, and the number of pixels w(r_i) in each region r_i.
With the image processing method proposed in this embodiment, the spatial weight corresponding to the first sub-region is obtained, and the saliency value corresponding to the first sub-region is then determined from the spatial distances between the first sub-region and the other sub-regions; determining the saliency value of the first sub-region with the spatial weight corresponding to the sub-region increases the influence of nearby regions and reduces the influence of distant regions, so that the subsequently obtained saliency map can highlight the edges of the salient target in the image to be processed.
Based on the first embodiment, an eighth embodiment of the image processing method of the present application is proposed. In this embodiment, step S300 includes:
S310: obtain a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value;
S320: calculate the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
In this embodiment, the first weight corresponding to the first saliency value and the second weight corresponding to the second saliency value are first obtained, and the target saliency value is then calculated based on the first saliency value, the first weight, the second saliency value, and the second weight, where the sum of the first weight and the second weight is 1 and the first weight ranges from 0.35 to 0.45.
Specifically, the target saliency value is calculated by the following formula:
S = βS(I_k) + (1-β)S(r_k)
where S is the target saliency value, β is the first weight, S(I_k) is the first saliency value, S(r_k) is the second saliency value, and (1-β) is the second weight.
With the image processing method proposed in this embodiment, the first weight corresponding to the first saliency value and the second weight corresponding to the second saliency value are obtained, and the target saliency value is then calculated from the first saliency value, the first weight, the second saliency value, and the second weight. Processed in this way, the resulting saliency map highlights both the interior of the salient target in the image to be processed and its edges.
In addition, an embodiment of the present application further provides a computer-readable storage medium on which computer-readable instructions are stored. When executed by a processor, the computer-readable instructions implement the following operations:
calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm;
calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm;
calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed; and
determining the saliency map corresponding to the image to be processed based on the target saliency value.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels; and
determining the first saliency value corresponding to the first pixel based on the first color distance.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
determining whether second pixels of the same color exist in the image to be processed; and
when no second pixels of the same color exist in the image to be processed, sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
when second pixels of the same color exist in the image to be processed, obtaining, based on the Lab color model, the second color distance between the target pixel among the second pixels and the third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels;
determining the saliency value of the target pixel based on the second color distance, and taking the saliency value of the target pixel as the saliency value of the second pixels;
sequentially traversing the third pixels and, based on the Lab color model, obtaining the third color distance between the currently traversed pixel and the fourth pixels, the fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel;
determining the saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels; and
determining the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
segmenting the image to be processed using the SLIC algorithm to obtain a plurality of sub-regions, each of which includes one pixel; and
calculating the saliency value corresponding to each sub-region based on the RC algorithm, and determining the second saliency value based on the saliency values corresponding to the sub-regions.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
sequentially traversing the sub-regions to obtain the spatial distance between the currently traversed first sub-region and the second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region; and
determining the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determining the second saliency value based on the saliency value corresponding to the first sub-region.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
obtaining the spatial weight corresponding to the first sub-region; and
determining the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
Further, when executed by the processor, the computer-readable instructions also implement the following operations:
obtaining the first weight corresponding to the first saliency value and the second weight corresponding to the second saliency value; and
calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or system that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or system. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or system that includes that element.
The serial numbers of the above embodiments of the present application are for description only and do not represent the superiority or inferiority of the embodiments.
Through the description of the above embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general-purpose hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present application, in essence or in the part that contributes to the existing technology, can be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) as described above and includes several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present application.
The above are only preferred embodiments of the present application and do not limit the patent scope of the present application. Any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, or any direct or indirect application in other related technical fields, is likewise included within the patent protection scope of the present application.

Claims (15)

  1. An image processing method, characterized in that the image processing method includes the following steps:
    calculating, based on color data of an image to be processed, a first saliency value corresponding to each pixel in the image to be processed using an HC algorithm;
    calculating a second saliency value corresponding to each pixel in the image to be processed based on an RC algorithm;
    calculating, based on the first saliency value and the second saliency value, a target saliency value corresponding to each pixel in the image to be processed; and
    determining a saliency map corresponding to the image to be processed based on the target saliency value.
  2. The image processing method according to claim 1, wherein the step of calculating, based on the color data of the image to be processed, the first saliency value corresponding to each pixel in the image to be processed using the HC algorithm includes:
    sequentially traversing the pixels of the image to be processed and, based on a Lab color model, obtaining a first color distance between a currently traversed first pixel and the other pixels; and
    determining the first saliency value corresponding to the first pixel based on the first color distance.
  3. The image processing method according to claim 2, wherein the step of sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels includes:
    determining whether second pixels of the same color exist in the image to be processed; and
    when no second pixels of the same color exist in the image to be processed, sequentially traversing the pixels of the image to be processed and, based on the Lab color model, obtaining the first color distance between the currently traversed first pixel and the other pixels.
  4. The image processing method according to claim 3, characterized in that, after the step of determining whether second pixels of the same color exist in the image to be processed, the method further includes:
    when second pixels of the same color exist in the image to be processed, obtaining, based on the Lab color model, a second color distance between a target pixel among the second pixels and third pixels, where the third pixels are the pixels of the image to be processed other than the second pixels;
    determining a saliency value of the target pixel based on the second color distance, and taking the saliency value of the target pixel as a saliency value of the second pixels;
    sequentially traversing the third pixels and, based on the Lab color model, obtaining a third color distance between a currently traversed pixel and fourth pixels, a fourth color distance between the currently traversed pixel and the target pixel, and the number of the second pixels, where the fourth pixels are the third pixels other than the currently traversed pixel;
    determining a saliency value of the third pixels based on the third color distance, the fourth color distance, and the number of pixels; and
    determining the first saliency value based on the saliency value of the second pixels and the saliency value of the third pixels.
  5. The image processing method according to claim 1, characterized in that the step of calculating the second saliency value corresponding to each pixel in the image to be processed based on the RC algorithm includes:
    segmenting the image to be processed using an SLIC algorithm to obtain a plurality of sub-regions, where each of the sub-regions includes one pixel; and
    calculating a saliency value corresponding to each sub-region based on the RC algorithm, and determining the second saliency value based on the saliency values corresponding to the sub-regions.
  6. The image processing method according to claim 5, characterized in that the step of calculating the saliency value corresponding to each sub-region based on the RC algorithm and determining the second saliency value based on the saliency values corresponding to the sub-regions includes:
    sequentially traversing the sub-regions to obtain a spatial distance between a currently traversed first sub-region and second sub-regions, where the second sub-regions are the sub-regions other than the first sub-region; and
    determining the saliency value corresponding to the first sub-region based on the obtained spatial distances, and determining the second saliency value based on the saliency value corresponding to the first sub-region.
  7. The image processing method according to claim 6, characterized in that the step of determining the saliency value corresponding to the first sub-region based on the obtained spatial distances includes:
    obtaining a spatial weight corresponding to the first sub-region; and
    determining the saliency value corresponding to the first sub-region based on the spatial distance and the spatial weight.
  8. The image processing method according to claim 1, characterized in that the step of calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed includes:
    obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
    calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
    where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  9. The image processing method according to claim 2, characterized in that the step of calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed includes:
    obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
    calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
    where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  10. The image processing method according to claim 3, characterized in that the step of calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed includes:
    obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
    calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
    where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  11. The image processing method according to claim 4, characterized in that the step of calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed includes:
    obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
    calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
    where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  12. The image processing method according to claim 5, characterized in that the step of calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed includes:
    obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
    calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
    where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  13. The image processing method according to claim 6, characterized in that the step of calculating, based on the first saliency value and the second saliency value, the target saliency value corresponding to each pixel in the image to be processed includes:
    obtaining a first weight corresponding to the first saliency value and a second weight corresponding to the second saliency value; and
    calculating the target saliency value based on the first saliency value, the first weight, the second saliency value, and the second weight;
    where the sum of the first weight and the second weight is 1, and the first weight ranges from 0.35 to 0.45.
  14. An image processing apparatus, characterized in that the image processing apparatus includes a memory, a processor, and computer-readable instructions stored in the memory and executable on the processor; when the computer-readable instructions are executed by the processor, the following steps are implemented:
    calculating, based on color data of an image to be processed, a first saliency value corresponding to each pixel in the image to be processed using an HC algorithm;
    calculating a second saliency value corresponding to each pixel in the image to be processed based on an RC algorithm;
    calculating, based on the first saliency value and the second saliency value, a target saliency value corresponding to each pixel in the image to be processed; and
    determining a saliency map corresponding to the image to be processed based on the target saliency value.
  15. A computer-readable storage medium, characterized in that computer-readable instructions are stored on the computer-readable storage medium; when the computer-readable instructions are executed by a processor, the following steps are implemented:
    calculating, based on color data of an image to be processed, a first saliency value corresponding to each pixel in the image to be processed using an HC algorithm;
    calculating a second saliency value corresponding to each pixel in the image to be processed based on an RC algorithm;
    calculating, based on the first saliency value and the second saliency value, a target saliency value corresponding to each pixel in the image to be processed; and
    determining a saliency map corresponding to the image to be processed based on the target saliency value.
PCT/CN2019/079510 2018-10-25 2019-03-25 Image processing method and apparatus, and computer-readable storage medium WO2020082686A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP19875761.9A EP3751510A4 (en) 2018-10-25 2019-03-25 IMAGE PROCESSING METHOD AND DEVICE AND COMPUTER-READABLE STORAGE MEDIUM
US16/976,687 US20210049786A1 (en) 2018-10-25 2019-03-25 Method and device of processing image, and computer readable storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811254503.6 2018-10-25
CN201811254503.6A CN109461130A (zh) 2018-10-25 2018-10-25 图像处理方法、装置及计算机可读存储介质

Publications (1)

Publication Number Publication Date
WO2020082686A1 true WO2020082686A1 (zh) 2020-04-30

Family

ID=65608553

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/079510 WO2020082686A1 (zh) 2018-10-25 2019-03-25 图像处理方法、装置及计算机可读存储介质

Country Status (4)

Country Link
US (1) US20210049786A1 (zh)
EP (1) EP3751510A4 (zh)
CN (1) CN109461130A (zh)
WO (1) WO2020082686A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113393458A (zh) * 2021-07-14 2021-09-14 华东理工大学 一种基于伤口加权显著性算法的手部伤口检测方法

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109461130A (zh) * 2018-10-25 2019-03-12 深圳创维-Rgb电子有限公司 图像处理方法、装置及计算机可读存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714537A (zh) * 2013-12-19 2014-04-09 武汉理工大学 一种图像显著性的检测方法
CN104091326A (zh) * 2014-06-16 2014-10-08 小米科技有限责任公司 图标分割方法和装置
CN105869173A (zh) * 2016-04-19 2016-08-17 天津大学 一种立体视觉显著性检测方法
CN109461130A (zh) * 2018-10-25 2019-03-12 深圳创维-Rgb电子有限公司 图像处理方法、装置及计算机可读存储介质

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2394851A (en) * 2002-10-30 2004-05-05 Hewlett Packard Co A camera having a user operable control for producing a saliency signal representative of a user's interest in a scene being imaged
US9042648B2 (en) * 2012-02-23 2015-05-26 Microsoft Technology Licensing, Llc Salient object segmentation
WO2013165565A1 (en) * 2012-04-30 2013-11-07 Nikon Corporation Method of detecting a main subject in an image
US10055850B2 (en) * 2014-09-19 2018-08-21 Brain Corporation Salient features tracking apparatus and methods using visual initialization
US9626584B2 (en) * 2014-10-09 2017-04-18 Adobe Systems Incorporated Image cropping suggestion using multiple saliency maps
US10262229B1 (en) * 2015-03-24 2019-04-16 Hrl Laboratories, Llc Wide-area salient object detection architecture for low power hardware platforms
CN108596921A (zh) * 2018-05-10 2018-09-28 苏州大学 图像显著区域检测的方法、装置、设备及可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103714537A (zh) * 2013-12-19 2014-04-09 武汉理工大学 一种图像显著性的检测方法
CN104091326A (zh) * 2014-06-16 2014-10-08 小米科技有限责任公司 图标分割方法和装置
CN105869173A (zh) * 2016-04-19 2016-08-17 天津大学 一种立体视觉显著性检测方法
CN109461130A (zh) * 2018-10-25 2019-03-12 深圳创维-Rgb电子有限公司 图像处理方法、装置及计算机可读存储介质

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
CHENG, MINGMING ET AL.: "Global Contrast based Salient Region Detection", IEEE CVPR 2011, 31 December 2011 (2011-12-31), pages 409 - 416, XP032037846 *
See also references of EP3751510A4 *


Also Published As

Publication number Publication date
EP3751510A1 (en) 2020-12-16
EP3751510A4 (en) 2021-12-15
CN109461130A (zh) 2019-03-12
US20210049786A1 (en) 2021-02-18

Similar Documents

Publication Publication Date Title
US9418319B2 (en) Object detection using cascaded convolutional neural networks
US20190147361A1 (en) Learned model provision method and learned model provision device
WO2019041519A1 (zh) 目标跟踪装置、方法及计算机可读存储介质
US9129435B2 (en) Method for creating 3-D models by stitching multiple partial 3-D models
WO2018120460A1 (zh) 图像焦距检测方法、装置、设备及计算机可读存储介质
WO2019057041A1 (zh) 用于实现图像增强的方法、装置和电子设备
WO2017088804A1 (zh) 人脸图像中检测眼镜佩戴的方法及装置
CN110648363A (zh) 相机姿态确定方法、装置、存储介质及电子设备
US10122912B2 (en) Device and method for detecting regions in an image
WO2022105569A1 (zh) 页面方向识别方法、装置、设备及计算机可读存储介质
WO2022002262A1 (zh) 基于计算机视觉的字符序列识别方法、装置、设备和介质
WO2020082686A1 (zh) 图像处理方法、装置及计算机可读存储介质
WO2019148923A1 (zh) 一种以图搜图方法、装置、电子设备及存储介质
WO2018184255A1 (zh) 图像校正的方法和装置
CN108052869B (zh) 车道线识别方法、装置及计算机可读存储介质
CN110633712A (zh) 一种车身颜色识别方法、系统、装置及计算机可读介质
CN113947768A (zh) 一种基于单目3d目标检测的数据增强方法和装置
CN111199169A (zh) 图像处理方法和装置
CN110657760B (zh) 基于人工智能的测量空间面积的方法、装置及存储介质
WO2020000270A1 (zh) 一种图像处理方法、装置及系统
US11270152B2 (en) Method and apparatus for image detection, patterning control method
CN113470103B (zh) 车路协同中相机作用距离确定方法、装置和路侧设备
US11281890B2 (en) Method, system, and computer-readable media for image correction via facial ratio
WO2021238188A1 (zh) 图像配准方法及装置
CN111242187B (zh) 一种图像相似度处理方法、装置、介质和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19875761

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2019875761

Country of ref document: EP

Effective date: 20200909

NENP Non-entry into the national phase

Ref country code: DE