CN108960247B - Image significance detection method and device and electronic equipment


Info

Publication number
CN108960247B
Authority
CN
China
Prior art keywords: image, pixel, feature, saliency, value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710363563.0A
Other languages
Chinese (zh)
Other versions
CN108960247A (en)
Inventor
胡康康
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd
Priority to CN201710363563.0A
Publication of CN108960247A
Application granted
Publication of CN108960247B
Status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V 10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The application discloses an image saliency detection method, which comprises the following steps: segmenting pixel feature blocks of an image; detecting edge features of the image; extracting, from the segmented pixel feature blocks, the pixel feature blocks that meet a preset threshold; determining saliency features of the image based on the extracted pixel feature blocks; and comparing the edge features with the saliency features to determine the salient region of the image. By combining image segmentation, image edge detection and image saliency feature detection, the method locates the salient region of the image more accurately and thus improves the accuracy of image saliency detection.

Description

Image significance detection method and device and electronic equipment
Technical Field
The application relates to the technical field of image processing, in particular to an image saliency detection method. The application also relates to an image saliency detection device, another image saliency detection method and device, two electronic devices and two computer readable media.
Background
Automatic layout in graphic design (Layout) requires that the elements in a picture be arranged according to certain rules and reading habits so as to produce a result that meets aesthetic requirements, and Saliency Detection is a crucial step in this process. Saliency detection is a way for a computer to imitate the human visual system in understanding an image scene: when observing an image, the human eye unconsciously concentrates attention on the regions of greatest interest, and the task of saliency detection is to find the regions of an image that are most likely to become the focus of human visual attention. Image saliency detection has received wide attention and is applied in fields such as automatic target localization and segmentation, image information retrieval and image compression, because the detection result allows computing resources to be concentrated on the most valuable information. The saliency of an image can be represented by a saliency map (Saliency Map), in which the gray value of a pixel represents the saliency of the corresponding image region; the brighter the pixel, the higher the saliency.
At present, the mainstream methods for image saliency detection are bottom-up algorithms based on low-level image features (color, edge, texture, and the like); their basic pipeline includes feature extraction, feature fusion, saliency computation and salient-region segmentation. Such algorithms are prone to false detections: in some artificially generated pictures, regions that contain no features may still be mistakenly identified as salient regions, and the results of such algorithms usually produce higher saliency values near the edges of objects in the picture instead of uniformly highlighting the entire visually salient object. Existing image saliency detection methods therefore suffer from false detection, their accuracy is low, and accurate results usually require manual post-processing.
Disclosure of Invention
The application provides an image saliency detection method, which aims to solve the problem that the accuracy of image saliency detection in the prior art is low.
The application also relates to an image saliency detection device, another image saliency detection method and device, two electronic devices and two computer readable media.
The application provides an image saliency detection method, which comprises the following steps:
segmenting a pixel feature block of an image;
detecting an edge feature of the image;
extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks;
determining salient features of the image based on the extracted pixel feature block;
and comparing the edge features with the salient features to determine the salient region of the image.
Optionally, the determining the saliency feature of the image based on the extracted pixel feature block is implemented by:
for at least one pixel point of the image, the following operations are performed: calculating the saliency feature value of the pixel point; normalizing the saliency feature value of the pixel point; and performing binarization segmentation on the normalized saliency feature value, and determining the salient pixel points and non-salient pixel points of the image according to the segmentation result.
Optionally, the determining the saliency feature of the image based on the extracted pixel feature block is implemented by:
calculating the significance characteristic value of the pixel point of the image;
and carrying out binarization segmentation on the significant characteristic values of the pixel points, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the saliency characteristic value of the pixel point in the extracted pixel characteristic block is a lower limit value of the saliency characteristic value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization of the significant characteristic values of the pixel points is implemented by the following method:
and normalizing the significance characteristic value of the pixel point to the gray value of the pixel point.
Optionally, the binarization segmentation of the normalized saliency feature value is implemented in the following manner:
judging whether the normalized gray value of the pixel point is greater than a gray threshold; if so, the pixel point is a salient pixel point of the image, and if not, the pixel point is a non-salient pixel point of the image.
Optionally, the segmenting of the pixel feature block of the image is implemented by adopting the following method: and segmenting the pixel characteristic blocks of the image by adopting a Mean-shift image segmentation algorithm.
Optionally, the detecting the edge feature of the image is implemented by the following method: and detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
Optionally, the extracting of the pixel feature blocks which meet the preset threshold from the segmented pixel feature blocks is implemented in the following manner:
and calculating the number of pixel points in the segmented pixel characteristic block, and if the number of the pixel points in the pixel characteristic block exceeds a threshold value of the number of the pixel points, extracting the pixel characteristic block.
Optionally, the extracting of the pixel feature blocks which meet the preset threshold from the segmented pixel feature blocks is implemented in the following manner:
and calculating the area of the segmented pixel feature block, and if the area of the pixel feature block is larger than or equal to an area threshold value, extracting the pixel feature block.
Optionally, the edge feature and the salient feature are compared, and the following method is adopted:
and performing, for each pixel point, an OR operation between the gray value of the pixel point's edge feature obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the salient region of the image according to the resulting gray values of the pixel points of the image.
The present application further provides an image saliency detection apparatus, including:
the image segmentation unit is used for segmenting the pixel feature block of the image;
an edge detection unit for detecting an edge feature of the image;
a pixel feature block extraction unit, configured to extract a pixel feature block that meets a preset threshold from among the segmented pixel feature blocks;
a saliency feature determination unit for determining a saliency feature of the image based on the extracted pixel feature block;
and the salient region determining unit is used for comparing the edge features with the salient features to determine the salient region of the image.
The present application further provides an image saliency detection method, including:
detecting edge features of the image;
identifying pixel feature blocks in the image which meet a preset threshold;
determining salient features of the image based on the identified blocks of pixel features;
and comparing the edge features with the salient features to determine the salient region of the image.
Optionally, the determining the salient features of the image based on the identified pixel feature block is implemented by:
for at least one pixel point of the image, the following operations are performed:
calculating the significance characteristic value of the pixel point;
normalizing the significant characteristic values of the pixel points;
and carrying out binarization segmentation on the normalized significant characteristic value, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the edge feature and the salient feature are compared, and the following method is adopted:
and performing, for each pixel point, an OR operation between the gray value of the pixel point's edge feature obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the salient region of the image according to the resulting gray values of the pixel points of the image.
The present application further provides an image saliency detection apparatus comprising:
an edge detection unit for detecting an edge feature of the image;
the pixel feature block identification unit is used for identifying a pixel feature block which meets a preset threshold value in the image;
a saliency feature determination unit for determining a saliency feature of the image based on the identified pixel feature block;
and the salient region determining unit is used for comparing the edge features with the salient features to determine the salient region of the image.
The present application further provides an electronic device, comprising:
a memory, and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
segmenting a pixel feature block of an image;
detecting an edge feature of the image;
extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks;
determining salient features of the image based on the extracted pixel feature block;
and comparing the edge features with the salient features to determine the salient region of the image.
The present application additionally provides an electronic device comprising:
a memory, and a processor;
the memory is to store computer-executable instructions, and the processor is to execute the computer-executable instructions to:
detecting edge features of the image;
identifying pixel feature blocks in the image which meet a preset threshold;
determining salient features of the image based on the identified blocks of pixel features;
and comparing the edge features with the salient features to determine the salient region of the image.
The present application further provides a computer-readable medium having instructions stored thereon that are executable to:
segmenting a pixel feature block of an image;
detecting an edge feature of the image;
extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks;
determining salient features of the image based on the extracted pixel feature block;
and comparing the edge features with the salient features to determine the salient region of the image.
The present application additionally provides a computer-readable medium having instructions stored thereon that are executable to:
detecting edge features of the image;
identifying pixel feature blocks in the image which meet a preset threshold;
determining salient features of the image based on the identified blocks of pixel features;
and comparing the edge features with the salient features to determine the salient region of the image.
The image saliency detection method provided by the application comprises the following steps: segmenting a pixel feature block of an image; detecting an edge feature of the image; extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks; determining salient features of the image based on the extracted pixel feature blocks; and comparing the edge features with the salient features to determine the salient region of the image.
By combining segmentation of the pixel feature blocks of the image, detection of the image edge features and detection of the image saliency features, the image saliency detection method first segments the pixel feature blocks of the image and determines the saliency features of the image from the blocks extracted from the segmentation result; at the same time it detects the edge features of the image; and finally it compares the saliency features of the image with its edge features and determines the salient region of the image from the comparison result.
Drawings
FIG. 1 is a schematic diagram of an embodiment of an image saliency detection method provided by the present application;
FIG. 2 is a schematic illustration of an image provided herein;
FIG. 3 is a schematic diagram of a block of pixel features of an image provided herein;
FIG. 4 is a schematic diagram of an image edge feature provided by the present application;
FIG. 5 is a schematic illustration of a normalized saliency image provided by the present application;
FIG. 6 is a schematic diagram of a binarized saliency image provided by the present application;
FIG. 7 is a schematic diagram of a saliency detection result image provided by the present application;
FIG. 8 is a schematic diagram of an embodiment of an image saliency detection apparatus provided by the present application;
FIG. 9 is a schematic diagram of another embodiment of an image saliency detection method provided by the present application;
FIG. 10 is a schematic diagram of another embodiment of an image saliency detection apparatus provided by the present application;
FIG. 11 is a schematic diagram of an embodiment of an electronic device provided by the present application;
FIG. 12 is a schematic diagram of another embodiment of an electronic device provided herein.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. However, the application can be implemented in many ways other than those described herein, and those skilled in the art can make similar generalizations without departing from the spirit of the application; the application is therefore not limited to the specific implementations disclosed below.
The application provides an image saliency detection method, and also provides an image saliency detection device, another image saliency detection method and device, two electronic devices, and two computer-readable media. The embodiments and the steps of the methods are described in detail below, one by one, with reference to the drawings provided in the present application.
The embodiment of the image saliency detection method provided by the application is as follows:
referring to fig. 1, a schematic diagram of an embodiment of an image saliency detection method provided by the present application is shown; referring to fig. 2, there is shown a schematic diagram of an image provided by the present application; referring to fig. 3, a schematic diagram of a pixel feature block of an image provided by the present application is shown; referring to fig. 4, a schematic diagram of an image edge feature provided by the present application is shown; referring to FIG. 5, a schematic diagram of a normalized saliency image provided by the present application is shown; referring to fig. 6, a schematic diagram of a binarized saliency image provided by the present application is shown; referring to fig. 7, a schematic diagram of a saliency detection result image provided by the present application is shown.
Step S101, segmenting the pixel feature blocks of the image.
In the automatic layout of a graphic design, a basic requirement is that elements (text or images) must not be pasted over objects of the background image that have salient features. Saliency detection therefore needs to be performed on the input background image so that its salient region can be extracted and used as a constraint in the automatic layout: the salient region of the background image is the region onto which elements cannot be pasted, and the non-salient region of the background image is the region onto which elements can be pasted. The image saliency detection method provided by the application combines saliency detection, image segmentation and edge detection to detect and segment the salient and non-salient regions of the background image, thereby obtaining the salient region of the background image that elements must avoid.
The image in the embodiments of the present application refers to the background image onto which text elements or image elements are pasted during automatic layout of a graphic design; it may be an artificially generated image or an image captured by a camera. For example, for the image shown in fig. 2, one possible situation is that the colored bottle in the image is an item sold in an online shopping mall; when the item is sold, the image of the bottle itself needs to be displayed, and necessary text elements or image elements often need to be added to the image as descriptive information, so that a user can intuitively obtain as much information about the item as possible. Another possible situation is that fig. 2 is a background image and the colored bottle is part of it; when a text element or image element is added to the background image (fig. 2), the region where the bottle is located (the salient region) should be avoided as much as possible for the sake of the visual quality of the background image, and the element should instead be added to the background region outside it (the non-salient region).
A pixel feature block of the image is a region of the image whose pixel points are contiguous and whose gray values are largely uniform. An edge feature of the image is an image feature located at the edge of an object in the image.
In this embodiment, the pixel feature blocks of the image are segmented with the Mean-shift image segmentation algorithm, which yields each pixel feature block of the image. Mean-shift is a non-parametric, kernel-based density estimation algorithm: the kernel density estimate increases along the density gradient direction and finally converges to a nearby local maximum of the probability density, so the algorithm can divide the image into several regions whose pixel points are contiguous and whose gray values are fairly uniform. For example, after the image shown in fig. 2 is segmented with the Mean-shift algorithm, the resulting pixel feature blocks are shown in fig. 3.
It should be noted that, besides the Mean-shift image segmentation algorithm described above, other image segmentation algorithms may also be used to detect and segment the pixel feature blocks of the image; this is not limited in this embodiment.
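The patent itself gives no code, but as a rough illustration of how this segmentation step could be realized with off-the-shelf tools, the sketch below (Python with OpenCV/NumPy) applies mean-shift filtering and then labels the resulting near-uniform regions by flood fill, in the spirit of OpenCV's mean-shift segmentation sample. The function name, parameter values and the flood-fill labeling step are assumptions for illustration, not part of the patent.

```python
import cv2
import numpy as np

def segment_feature_blocks(image_bgr, spatial_radius=21, color_radius=30):
    """Rough sketch of the segmentation step; parameters are illustrative."""
    # Mean-shift filtering drives each pixel toward a locally dominant color,
    # producing regions of contiguous pixels with near-uniform values.
    smoothed = cv2.pyrMeanShiftFiltering(image_bgr, spatial_radius, color_radius)

    h, w = smoothed.shape[:2]
    labels = np.zeros((h, w), dtype=np.int32)        # 0 = not yet assigned to a block
    mask = np.zeros((h + 2, w + 2), dtype=np.uint8)  # shared flood-fill mask
    label = 0
    for y in range(h):
        for x in range(w):
            if mask[y + 1, x + 1] == 0:              # pixel not covered by an earlier fill
                label += 1
                before = mask.copy()
                # Collect the connected region of near-uniform color around (x, y);
                # FLOODFILL_MASK_ONLY leaves the image itself untouched.
                cv2.floodFill(smoothed, mask, (x, y), (0, 0, 0),
                              loDiff=(2, 2, 2), upDiff=(2, 2, 2),
                              flags=4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8))
                labels[(mask != before)[1:-1, 1:-1]] = label
    return labels  # labels[y, x] = index of the pixel feature block containing (x, y)
```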
Step S102, detecting the edge characteristics of the image.
The image saliency detection method provided by the application also detects the edge features of the image. Because segmenting the pixel feature blocks and detecting the edge features are relatively independent processes, edge detection may be performed during, before or after the segmentation in step S101, and likewise during, before or after steps S103 and S104 described below; it only has to be completed, with the edge detection result available, before step S105 described below is executed.
In this embodiment, the Canny edge detection operator is used to detect the edge features of the image. The Canny operator filters the image with the first derivative of a Gaussian function and approximately optimizes the product of signal-to-noise ratio and localization. Applying it to the image yields the gray values of the edge features of the pixel points; the image formed by these gray values is a binary image in which non-edge pixel points have gray value 0 and edge pixel points have gray value 255. For example, for the image shown in fig. 2, the edge features obtained with the Canny operator are shown in fig. 4.
It should be noted that, besides the Canny edge detection operator described above, other edge detection algorithms may also be used to obtain the edge features of the image; this is not limited in this embodiment.
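As a minimal sketch of this edge detection step, the following uses OpenCV's Canny implementation; the Gaussian blur parameters and hysteresis thresholds are illustrative choices, not values fixed by the text.

```python
import cv2

def detect_edge_features(image_bgr, low_threshold=50, high_threshold=150):
    """Sketch of the edge detection step; thresholds are assumed, not prescribed."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 1.4)     # suppress noise before Canny
    edges = cv2.Canny(blurred, low_threshold, high_threshold)
    return edges  # binary map: non-edge pixel points 0, edge pixel points 255
```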
Step S103, extracting pixel feature blocks that meet a preset threshold from the segmented pixel feature blocks.
Step S101 above segments the image into its pixel feature blocks; on that basis, this step extracts the pixel feature blocks that meet a preset threshold. The purpose of the extraction is as follows: the extracted pixel feature blocks are set as background features, which prevents large featureless regions (the pixel feature blocks meeting the preset threshold) from being detected as salient regions during saliency detection, and thus improves the accuracy of saliency detection.
In a specific implementation, the preset threshold may be a pixel-count threshold that restricts the number of pixel points in an extracted block. For example, if the pixel-count threshold is 10000, the pixel feature blocks of the image containing more than 10000 pixel points are extracted. Concretely, the number of pixel points in each segmented pixel feature block is first calculated; if the number of pixel points in a block exceeds the pixel-count threshold, the block is extracted, and otherwise the block is left unprocessed.
Alternatively, the preset threshold may be an area threshold that restricts the size of an extracted block. For example, if the area threshold is 50 × 50, the pixel feature blocks whose area exceeds 50 × 50 (i.e. whose extents in both the length and width directions exceed 50 pixels) are extracted. Concretely, the area of each segmented pixel feature block is first calculated; if the area of a block is greater than or equal to the area threshold, the block is extracted, and otherwise the block is left unprocessed.
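A small sketch of this extraction step is given below. The `labels` array is assumed to come from the segmentation sketch above; the thresholds mirror the examples in the text (10000 pixel points, a 50 × 50 extent) but are otherwise arbitrary.

```python
import numpy as np

def extract_background_blocks(labels, min_pixels=10000, min_side=50):
    """Keep the pixel feature blocks whose size passes the preset threshold."""
    extracted = np.zeros(labels.shape, dtype=bool)
    for block_id in range(1, int(labels.max()) + 1):
        block = labels == block_id
        count = int(block.sum())
        if count == 0:
            continue
        ys, xs = np.nonzero(block)
        height = int(ys.max() - ys.min()) + 1
        width = int(xs.max() - xs.min()) + 1
        # Either criterion described in the text: a pixel-count threshold, or an
        # area threshold on the block's extent in both directions.
        if count > min_pixels or (height > min_side and width > min_side):
            extracted |= block
    return extracted  # True where the pixel point belongs to an extracted block
```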
Step S104, determining the salient features of the image based on the extracted pixel feature blocks.
In the embodiments of the present application, the saliency features of the image are the saliency feature values of its pixel points. The saliency features could also be characterized by quantities other than saliency feature values, but whichever quantities are used, they can be determined by analysis with reference to the implementations provided below.
Specifically, this step determines the saliency features of the image based on the extracted pixel feature blocks in the following manner (a code sketch of the whole procedure follows the three sub-steps below):
For each pixel point of the image, the following operations are performed:
1) Calculate the saliency feature value of the pixel point.
as described above, the purpose of extracting the pixel feature block of the image satisfying the preset threshold in step S102 is to set it as a non-significant region, so here, before analyzing and calculating the significant feature value of the pixel point of the image, the significant feature value of the pixel point in the extracted pixel feature block is set as the lower limit value of the significant feature value (for example, set as 0).
The saliency feature value of a pixel point of the image is obtained by multiple iterative computations, so each pixel point needs an initial saliency feature value; starting from the initial value, the saliency feature value is updated during the iterations, and after the iterations finish, the saliency feature value of the pixel point is determined from the iteration result. Given the characteristics of the image, in a specific implementation the initial saliency feature value of the image boundary pixel points may be set to the lower limit of the saliency feature value (for example, 0), and the initial saliency feature values of the remaining pixel points may be set to the upper limit of the saliency feature value (for example, +∞).
Specifically, during the iterative computation of the saliency feature values, if the iteration count is odd, the saliency feature value of a pixel point is updated from the saliency feature values of its left and upper neighboring pixel points; if the iteration count is even, it is updated from the saliency feature values of its right and lower neighboring pixel points.
On this basis, to improve computational efficiency and speed up the iterations, in practice the pixel points of the image can be visited in raster-scan order during each iteration, and their saliency feature values updated one after another in the order in which they are scanned.
Specifically, if the iteration count is odd, the pixel points are scanned in forward order, from the first pixel point at the upper-left corner of the image to the last pixel point at the lower-right corner: the first row is scanned rightward starting from the upper-left pixel point; the scan then moves to the pixel point directly below it and the second row is scanned rightward; and so on until the last row at the bottom of the image has been scanned. Whenever a pixel point is visited during this scan, its saliency feature value is updated from the saliency feature values of its left and upper neighboring pixel points.
Correspondingly, if the iteration count is even, the pixel points are scanned in reverse order (the path opposite to the forward scan), from the last pixel point at the lower-right corner of the image to the first pixel point at the upper-left corner. Whenever a pixel point is visited during this scan, its saliency feature value is updated from the saliency feature values of its right and lower neighboring pixel points.
2) Normalize the saliency feature value of the pixel point.
In this embodiment, normalizing the saliency feature values of the pixel points means mapping them to gray values of the pixel points, i.e. converting the saliency feature value obtained by the iterative computation into a gray value, so that the saliency of each pixel point, and of the image as a whole, can be observed intuitively. For example, for the image shown in fig. 2, after the pixel feature blocks whose pixel counts exceed the pixel-count threshold have been extracted, the saliency feature values obtained by the iterations and normalized to gray values produce the image shown in fig. 5.
3) Perform binarization segmentation on the normalized saliency feature value, and determine the salient and non-salient pixel points of the image from the segmentation result.
To make the saliency features of the image more intuitive, the normalized gray values obtained above can be binarized: whether the normalized gray value of a pixel point exceeds a gray threshold is judged; if it does, the pixel point is a salient pixel point, and otherwise it is a non-salient pixel point. For example, binarizing the normalized image of fig. 5 with the gray value 100 as the threshold, i.e. setting pixel points with gray value greater than 100 to 255 and pixel points with gray value less than or equal to 100 to 0, yields the binarized image shown in fig. 6.
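The text does not spell out the exact per-neighbor update formula, so the sketch below assumes a geodesic-distance-style update (a pixel point's value is the minimum over its scanned neighbors of the neighbor's value plus the gray-level difference); everything else follows the sub-steps above: lower-limit initialization for the extracted blocks and the image boundary, upper-limit initialization elsewhere, alternating forward and reverse raster scans, normalization to gray values, and binarization. All names and the iteration count are illustrative.

```python
import numpy as np

def saliency_feature_values(gray, background_mask, n_iterations=4):
    """Sub-step 1): iterative computation of the saliency feature values.

    Assumed update rule (not fixed by the text):
        d(p) = min(d(p), d(q) + |gray(p) - gray(q)|) for scanned neighbour q,
    so a pixel point far (in this geodesic sense) from the background seeds
    ends up with a large value.
    """
    h, w = gray.shape
    g = gray.astype(np.float64)
    d = np.full((h, w), np.inf)                    # upper limit for ordinary pixels
    d[background_mask] = 0.0                       # extracted feature blocks -> lower limit
    d[0, :] = d[-1, :] = d[:, 0] = d[:, -1] = 0.0  # image boundary pixels -> lower limit

    for it in range(1, n_iterations + 1):
        if it % 2 == 1:                            # odd pass: forward raster scan
            for y in range(h):
                for x in range(w):
                    if x > 0:                      # left neighbour
                        d[y, x] = min(d[y, x], d[y, x - 1] + abs(g[y, x] - g[y, x - 1]))
                    if y > 0:                      # upper neighbour
                        d[y, x] = min(d[y, x], d[y - 1, x] + abs(g[y, x] - g[y - 1, x]))
        else:                                      # even pass: reverse raster scan
            for y in range(h - 1, -1, -1):
                for x in range(w - 1, -1, -1):
                    if x < w - 1:                  # right neighbour
                        d[y, x] = min(d[y, x], d[y, x + 1] + abs(g[y, x] - g[y, x + 1]))
                    if y < h - 1:                  # lower neighbour
                        d[y, x] = min(d[y, x], d[y + 1, x] + abs(g[y, x] - g[y + 1, x]))
    return d

def normalize_and_binarize(d, gray_threshold=100):
    """Sub-steps 2) and 3): map the values to 0-255 gray values, then threshold.

    The threshold 100 follows the worked example later in the text.
    """
    finite = np.isfinite(d)
    max_value = d[finite].max() if finite.any() else 1.0
    norm = np.zeros(d.shape, dtype=np.uint8)
    norm[finite] = np.round(255.0 * d[finite] / max(max_value, 1e-9)).astype(np.uint8)
    binary = np.where(norm > gray_threshold, 255, 0).astype(np.uint8)
    return norm, binary  # norm ~ fig. 5, binary ~ fig. 6 (255 = salient pixel point)
```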
In this embodiment the above operations are performed for every pixel point of the image. In practice, to save computing resources and speed up processing, the operations may instead be performed on only part of the pixel points, for example by processing every other pixel point in the order in which the pixel points are arranged, or by selecting part of the pixel points with a random algorithm.
In a specific implementation, besides the foregoing manner, the saliency features of the image may also be determined from the extracted pixel feature blocks in other ways, for example: calculating the saliency feature values of the pixel points of the image, performing binarization segmentation on these saliency feature values, and determining the salient and non-salient pixel points of the image from the segmentation result.
Step S105, comparing the edge features with the saliency features and determining the salient region of the image.
Based on the edge features of the image obtained by detection in the above steps and the saliency features of the image obtained by analysis in the above steps, the edge features and the saliency features are compared, and the salient region of the image is determined from the comparison result.
In a specific implementation, comparing the edge features with the saliency features means performing, for each pixel point, an OR operation between the gray value of the pixel point's edge feature obtained by detection and the binarized saliency feature value of the pixel point determined by analysis, and determining the salient region of the image from the resulting gray values. For example, taking the edge features obtained by the Canny operator shown in fig. 4 and the binarized image shown in fig. 6, the OR of the edge-feature gray value and the binarized gray value is computed for each pixel point and used as the pixel point's final gray value; after all pixel points have been processed in this way, the image formed by the resulting gray values is shown in fig. 7.
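Since both inputs are 0/255 maps, the comparison reduces to a per-pixel OR; the sketch below shows this, together with a hypothetical end-to-end wiring of the earlier sketches (all names are illustrative).

```python
import cv2
import numpy as np

def combine_edge_and_saliency(edges: np.ndarray, binary_saliency: np.ndarray) -> np.ndarray:
    """Per-pixel OR of the 0/255 edge map and the 0/255 binarized saliency map;
    255 marks the salient region, 0 the non-salient region."""
    return cv2.bitwise_or(edges, binary_saliency)

# Hypothetical wiring of the sketches above:
#   labels = segment_feature_blocks(image_bgr)
#   background = extract_background_blocks(labels)
#   gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
#   d = saliency_feature_values(gray, background)
#   _, binary = normalize_and_binarize(d)
#   result = combine_edge_and_saliency(detect_edge_features(image_bgr), binary)
```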
With reference to the background image shown in fig. 2, the following describes in detail the specific processing procedure that combines saliency detection, image segmentation and edge detection as described above.
In the background image of fig. 2, the colored bottle it contains is the element with the highest saliency, i.e. the region where the bottle is located is the salient region of the background image, and the regions of lower saliency are the other regions outside it. In addition, the regions adjacent to objects in the background image also have fairly high saliency; if text elements or image elements were added there, the saliency of these adjacent regions would interfere with how the added elements are perceived by the user, so these adjacent regions should not be used as non-salient regions for adding text or image elements.
Specifically, in the process of detecting the salient region of the background image, the pixel feature blocks of the background image are first segmented with the Mean-shift image segmentation algorithm; the resulting blocks are shown in fig. 3 and are numbered 301 to 310 from left to right and from top to bottom. In particular, because the colored bottle in the background image is composed of complex elements, segmenting the image region where the bottle is located with the Mean-shift algorithm produces a series of fine pixel feature blocks which, as can be seen in fig. 3, together form the basic outline of the bottle. Since there are too many such blocks in the bottle region to list individually, only the pixel feature blocks 301 to 310 are taken as examples; the blocks in the bottle region are processed in the same way as blocks 301 to 310, so the description of blocks 301 to 310 applies to them as well.
After the pixel feature blocks 301 to 310 are obtained by segmenting the background image, the number of pixel points of each block is calculated, giving N1 to N10 for blocks 301 to 310 respectively. Whether each of N1 to N10 exceeds the pixel-count threshold N0 is then judged; the result is that the pixel counts N6, N7, N8 and N10 of blocks 306, 307, 308 and 310 exceed N0, so blocks 306, 307, 308 and 310 are extracted.
After being extracted, the pixel feature blocks 306, 307, 308 and 310 are set as background regions of the background image, i.e. non-salient regions. As can be seen in fig. 3, these blocks are the relatively large featureless regions of the background image; adding text elements or image elements to the regions occupied by blocks 306, 307, 308 and 310 during automatic layout is obviously reasonable, so setting them as non-salient regions is logical.
With blocks 306, 307, 308 and 310 set as non-salient regions, the saliency distance of every pixel point of the background image is then calculated (the calculation is carried out on the basis that these blocks are non-salient regions). Once the saliency distances of all pixel points have been obtained, they are normalized to the gray values of the pixel points, i.e. mapped into the interval 0 to 255; after normalization every pixel point has its own gray value, and the corresponding image is shown in fig. 5.
The normalized gray values are then segmented by binarization: pixel points with gray value greater than 100 are set to 255 and pixel points with gray value less than or equal to 100 are set to 0, so after binarization every pixel point has a new gray value (0 or 255); the corresponding image is shown in fig. 6. The pixel points with the larger gray value (255) are the white pixel points in fig. 6, i.e. the salient pixel points, and together they form the salient feature region, which cannot be used for adding text elements or image elements; correspondingly, the pixel points with the smaller gray value (0) are the black pixel points in fig. 6, i.e. the non-salient pixel points, and together they form the non-salient feature region, which can be used for adding text elements or image elements. This is consistent with what is observed with the naked eye.
In the process of detecting the salient and non-salient regions of the background image, the edges of the background image also need to be detected. Specifically, the Canny edge detection operator is used to detect the edge features of the background image, shown in fig. 4, where the edge pixel points of the background image have gray value 255 and the non-edge pixel points have gray value 0; together the edge pixel points form the edge contour of the background image.
Finally, based on the salient and non-salient pixel points obtained above and the edge and non-edge pixel points obtained above, the gray value of each pixel point is computed pixel by pixel to obtain its final gray value (0 or 255). This is the final gray value of the pixel point of the background image after processing; the corresponding image is shown in fig. 7, and the gray value is the basis for finally classifying each pixel point as salient (gray value 255) or non-salient (gray value 0), and hence for determining the salient and non-salient regions of the background image.
As the above analysis shows, the image saliency detection method provided by the application determines constraint conditions from the result of image segmentation of the background image and then performs saliency detection under those constraints. This avoids interference from the complicated, fine object features contained in the background image and prevents large featureless regions (non-salient regions) from being detected as salient regions, so the image is processed more accurately and the accuracy of saliency detection is improved. At the same time, edge feature detection is performed on the background image to determine its edge features. Finally, the saliency detection result and the edge features are combined by computation, and the salient and non-salient regions of the background image are determined from the resulting gray values of its pixel points, which further improves the accuracy of image saliency detection. The method provided by the application is therefore not a simple combination of the three means of image segmentation, saliency detection and edge feature detection, and its effect is not merely the superposition of the effects obtainable from the three means individually. In summary, by combining segmentation of the pixel feature blocks of the image, detection of the image edge features and detection of the image saliency features, the method first segments the pixel feature blocks of the image and determines the saliency features of the image from the blocks extracted from the segmentation result; it also detects the edge features of the image; and it finally compares the saliency features with the edge features and determines the salient region of the image from the comparison result. In this way the salient region of the image is detected more accurately and the accuracy of image saliency detection is improved.
The embodiment of the image saliency detection device provided by the application is as follows:
in the foregoing embodiment, an image saliency detection method is provided, and accordingly, the present application also provides an image saliency detection apparatus, which is described below with reference to the accompanying drawings.
Referring to fig. 8, a schematic diagram of an embodiment of an image saliency detection apparatus provided by the present application is shown.
Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to the corresponding description of the method embodiments provided above for relevant portions. The device embodiments described below are merely illustrative.
The application provides an image saliency detection device, includes:
an image segmentation unit 801 configured to segment a pixel feature block of an image;
an edge detection unit 802, configured to detect an edge feature of the image;
a pixel feature block extraction unit 803, configured to extract a pixel feature block that meets a preset threshold from among the segmented pixel feature blocks;
a salient feature determination unit 804 for determining salient features of the image based on the extracted pixel feature blocks;
a salient region determining unit 805, configured to compare the edge feature with the salient feature, and determine a salient region of the image.
Optionally, the significant feature determining unit 804 includes:
a saliency feature value calculation subunit, used to calculate the saliency feature value of a pixel point;
a normalization subunit, used to normalize the saliency feature value of the pixel point;
a binarization segmentation subunit, used to perform binarization segmentation on the normalized saliency feature value and determine the salient pixel points and non-salient pixel points of the image according to the segmentation result;
wherein the saliency feature value calculation subunit, the normalization subunit and the binarization segmentation subunit operate on at least one pixel point of the image.
Optionally, the significant feature determining unit 804 includes:
the first subunit is used for calculating the significance characteristic value of the pixel point of the image;
and the second subunit is used for carrying out binarization segmentation on the significant characteristic values of the pixel points and determining significant pixel points and non-significant pixel points of the image according to segmentation results.
Optionally, the saliency characteristic value of the pixel point in the extracted pixel characteristic block is a lower limit value of the saliency characteristic value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization subunit is specifically configured to normalize the saliency characteristic value of the pixel to the gray value of the pixel.
Optionally, the binarization segmentation subunit is specifically configured to determine whether a gray value of the pixel point after normalization is greater than a gray threshold, and if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
Optionally, the image segmentation unit 801 includes:
and the image segmentation subunit is used for segmenting the pixel feature block of the image by adopting a Mean-shift image segmentation algorithm.
Optionally, the edge detecting unit 802 includes:
and the edge detection subunit is used for detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
Optionally, the pixel feature block extraction unit 803 includes:
and the first extraction subunit is used for calculating the number of the pixel points in the segmented pixel characteristic block, and extracting the pixel characteristic block if the number of the pixel points in the pixel characteristic block exceeds a threshold value of the number of the pixel points.
Optionally, the pixel feature block extraction unit 803 includes:
and the second extraction subunit is used for calculating the area of the segmented pixel feature block, and extracting the pixel feature block if the area of the pixel feature block is larger than or equal to an area threshold value.
Optionally, the salient region determining unit is specifically configured to perform an OR operation on the gray value of the edge feature of each pixel point of the image, obtained through detection, and the saliency feature value of that pixel point, determined through analysis, and to determine the salient region of the image according to the resulting gray values of the pixel points of the image.
Another embodiment of an image saliency detection method provided by the present application is as follows:
in the above embodiment, an image saliency detection method is provided, and in addition, another image saliency detection method is provided in the present application, which is described below with reference to the accompanying drawings.
Referring to fig. 9, a schematic diagram of an embodiment of an image saliency detection method provided by the present application is shown; referring to fig. 2, a schematic diagram of an image provided by the present application is shown; referring to fig. 3, a schematic diagram of a pixel feature block of an image provided by the present application is shown; referring to fig. 4, a schematic diagram of an image edge feature provided by the present application is shown; referring to fig. 5, a schematic diagram of a normalized saliency image provided by the present application is shown; referring to fig. 6, a schematic diagram of a binarized saliency image provided by the present application is shown; referring to fig. 7, a schematic diagram of a saliency detection result image provided by the present application is shown.
Since the present method embodiment is similar to the method embodiment above, its description is kept brief; for relevant portions, refer to the corresponding description of the method embodiment provided above. The method embodiment described below is merely illustrative. The difference between the present method embodiment and the method embodiment provided above is as follows: in the method embodiment provided above, the pixel feature blocks of the image are segmented by image segmentation, the pixel feature blocks satisfying a preset threshold are extracted from the segmented pixel feature blocks, and the saliency features of the image are determined based on the extracted pixel feature blocks; in the present method embodiment, the pixel feature blocks satisfying a preset threshold are identified in the image by image recognition, and the saliency features of the image are determined based on the identified pixel feature blocks.
Step S901, detecting edge features of an image;
in this embodiment, the Canny edge detection operator is used to detect the edge features of the image. The Canny operator is based on the first derivative of a Gaussian function and approximates the optimal edge detector that maximizes the product of signal-to-noise ratio and localization precision. Applying it to the image yields the gray values of the edge features of the pixel points of the image; the image formed by these gray values is a binary image in which non-edge pixel points have a gray value of 0 and edge pixel points have a gray value of 255. For example, for the image shown in fig. 2, the edge features obtained by the Canny edge detection operator are shown in fig. 4.
It should be noted that, besides the Canny edge detection operator described above, other edge detection algorithms may also be used to obtain the edge features of the image, which is not limited in this embodiment.
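By way of illustration, a minimal Python/OpenCV sketch of this detection step is shown below; the input path and the low/high hysteresis thresholds (100, 200) are assumptions for the example, not values specified in this application.

import cv2

# Read the image as grayscale and detect its edge features with the Canny operator.
gray = cv2.imread("input.jpg", cv2.IMREAD_GRAYSCALE)  # hypothetical input file
edges = cv2.Canny(gray, 100, 200)  # binary map: edge pixel points 255, non-edge pixel points 0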
Step S902, identifying pixel characteristic blocks which meet a preset threshold value in the image;
in practical applications, the identification of the pixel feature blocks that satisfy the preset threshold may be implemented with an existing image recognition algorithm. Specifically, an image recognition algorithm may first be used to identify the pixel feature blocks in the image; the number of pixel points contained in each identified pixel feature block, or its area, is then calculated; finally, the pixel feature blocks whose pixel count or area satisfies the preset threshold are determined. Alternatively, if an image recognition algorithm is available that can directly identify, according to an input condition (the number of pixel points or the area of a pixel feature block satisfying the preset threshold), the pixel feature blocks in the image that satisfy the preset threshold, that algorithm may be used directly.
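Since no particular recognition algorithm is prescribed, the following Python sketch is only one plausible realization under stated assumptions: it treats connected components of a coarsely quantized grayscale image as pixel feature blocks and keeps those whose pixel count meets an assumed threshold; the quantization step and the threshold value are illustrative, not taken from this application.

import cv2
import numpy as np

def identify_feature_blocks(gray, min_pixels=500):
    # Coarse intensity quantization stands in for a real recognition step.
    quantized = gray // 32
    mask = np.zeros(gray.shape, dtype=np.uint8)
    for level in np.unique(quantized):
        level_mask = (quantized == level).astype(np.uint8)
        num, labels, stats, _ = cv2.connectedComponentsWithStats(level_mask)
        for i in range(1, num):  # label 0 is the background of this level
            if stats[i, cv2.CC_STAT_AREA] >= min_pixels:
                mask[labels == i] = 255  # block satisfies the preset threshold
    return mask  # 255 marks pixel points inside identified pixel feature blocks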
Step S903, determining the salient features of the image based on the identified pixel feature blocks;
specifically, this step determines the salient features of the image based on the identified pixel feature block, and is implemented as follows:
for each pixel point of the image, the following operations are performed:
1) calculating the significance characteristic value of the pixel point;
since the identified pixel feature blocks that satisfy the preset threshold are to be treated as non-salient regions, the saliency feature values of the pixel points within the identified pixel feature blocks are set to the lower limit of the saliency feature value (for example, 0) before the saliency feature values of the pixel points are calculated.
The saliency feature value of a pixel point is obtained through multiple iterative computations, so it requires an initial value: the saliency feature value of the pixel point is updated iteratively on the basis of this initial value, and after the iterations are completed, the saliency feature value of the pixel point is determined according to the iteration result. According to the characteristics of the image, in a specific implementation the initial value of the saliency feature value of an image boundary pixel point may be set to the lower limit of the saliency feature value (for example, 0); correspondingly, the initial values of the saliency feature values of the remaining pixel points other than the image boundary pixel points may be set to the upper limit of the saliency feature value (for example, +∞).
Specifically, in the iterative computation process of the significant characteristic value, if the iterative computation times are odd, the significant characteristic value of the pixel point is updated according to the significant characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel point; and if the iterative calculation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
On this basis, in order to improve calculation efficiency and accelerate the iterative computation of the saliency feature values, in practical applications the pixel points of the image may be scanned in a raster scanning manner during the iterative computation, and the saliency feature values of the pixel points may be iteratively computed in the order in which the pixel points are scanned.
Specifically, if the number of iterations is odd (that is, an odd-numbered scan of the pixel points), the image is scanned in forward raster order, from the first pixel point at the upper left corner to the last pixel point at the lower right corner: the first row is scanned from the upper-left pixel point to the right; after all pixel points in the first row have been scanned, scanning returns to the first pixel point of the second row (directly below the upper-left pixel point) and the second row is scanned to the right; this continues row by row until the last row at the bottom of the image, which completes the scan. Whenever a pixel point is scanned, its saliency feature value is updated according to the saliency feature values of its left and upper adjacent pixel points.
Correspondingly, if the number of iterations is even (an even-numbered scan of the pixel points), the image is scanned in reverse raster order, from the last pixel point at the lower right corner to the first pixel point at the upper left corner (the reverse of the forward scanning path). Whenever a pixel point is scanned, its saliency feature value is updated according to the saliency feature values of its right and lower adjacent pixel points.
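The following Python sketch illustrates the two-pass update described above. The exact update formula is not restated in this embodiment, so the sketch assumes a chamfer-style rule in which a pixel point takes the smaller of its current value and a scanned neighbour's value plus the grayscale difference to that neighbour; the number of iterations and the cost term are assumptions.

import numpy as np

def iterate_saliency(gray, block_mask, n_iter=2):
    h, w = gray.shape
    g = gray.astype(np.float64)
    sal = np.full((h, w), np.inf)                            # upper limit for interior pixel points
    sal[0, :] = sal[-1, :] = sal[:, 0] = sal[:, -1] = 0.0    # image boundary: lower limit
    sal[block_mask > 0] = 0.0                                # identified pixel feature blocks: lower limit
    for it in range(1, n_iter + 1):
        if it % 2 == 1:  # odd iteration: forward raster scan, use left and upper neighbours
            for y in range(h):
                for x in range(w):
                    if x > 0:
                        sal[y, x] = min(sal[y, x], sal[y, x - 1] + abs(g[y, x] - g[y, x - 1]))
                    if y > 0:
                        sal[y, x] = min(sal[y, x], sal[y - 1, x] + abs(g[y, x] - g[y - 1, x]))
        else:            # even iteration: reverse raster scan, use right and lower neighbours
            for y in range(h - 1, -1, -1):
                for x in range(w - 1, -1, -1):
                    if x < w - 1:
                        sal[y, x] = min(sal[y, x], sal[y, x + 1] + abs(g[y, x] - g[y, x + 1]))
                    if y < h - 1:
                        sal[y, x] = min(sal[y, x], sal[y + 1, x] + abs(g[y, x] - g[y + 1, x]))
    return sal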
2) Normalizing the significant characteristic values of the pixel points;
in this embodiment, normalizing the saliency feature values of the pixel points means mapping them to gray values of the pixel points, that is, converting the saliency feature values obtained by the iterative computation into gray values so that the saliency of the pixel points, and of the image as a whole, can be observed visually. For example, for the image shown in fig. 2, after the saliency feature values of the pixel points (obtained through multiple iterative computations based on the identified pixel feature blocks that satisfy the preset threshold) are normalized to gray values, the image corresponding to these gray values is as shown in fig. 5.
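The exact normalization formula is not given here; the sketch below simply rescales the finite saliency feature values linearly to the 0–255 gray range, which is one common choice and an assumption of this example.

import numpy as np

def normalize_to_gray(sal):
    finite = np.isfinite(sal)
    lo, hi = sal[finite].min(), sal[finite].max()
    gray = np.zeros(sal.shape, dtype=np.uint8)
    if hi > lo:
        # Linearly map the saliency feature values to gray values in [0, 255].
        gray[finite] = np.round(255.0 * (sal[finite] - lo) / (hi - lo)).astype(np.uint8)
    return gray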
3) performing binarization segmentation on the normalized saliency feature values, and determining the salient pixel points and non-salient pixel points of the image according to the segmentation result.
In order to determine the saliency features of the image more intuitively, the gray values of the pixel points obtained by the normalization in the above step can be binarized: it is judged whether the normalized gray value of a pixel point is greater than a gray threshold; if so, the pixel point is a salient pixel point, and if not, it is a non-salient pixel point. For example, for the normalized gray-value image shown in fig. 5, the gray values of its pixel points are binarized: pixel points with a gray value greater than 100 are set to 255, and pixel points with a gray value less than or equal to 100 are set to 0; the image corresponding to the binarized gray values is shown in fig. 6.
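A one-line sketch of the binarization described above, using the gray threshold of 100 from the example (that threshold is specific to the example, not mandated by this application):

import numpy as np

def binarize(gray_saliency, threshold=100):
    # Pixel points with a gray value above the threshold become salient (255); the rest become non-salient (0).
    return np.where(gray_saliency > threshold, 255, 0).astype(np.uint8)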
Step S904, comparing the edge feature with the salient feature, and determining a salient region of the image.
In a specific implementation, this step compares the edge features with the saliency features by performing, for each pixel point, an OR operation on the gray value of the edge feature of the pixel point (obtained by detection) and the binarized saliency feature value of the pixel point (determined above), and the salient region of the image is then determined according to the resulting gray values. For example, given the edge features obtained by the Canny edge detection operator shown in fig. 4 and the binarized image shown in fig. 6, the gray value of the edge feature and the binarized gray value are OR-ed for each pixel point of the image, and the result is taken as the final gray value of that pixel point; after all pixel points of the image have been processed in this way, the image corresponding to the resulting gray values is as shown in fig. 7.
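A minimal sketch of this final combination, assuming `edges` and `binary_saliency` are 0/255 maps such as those produced by the earlier sketches:

import cv2

def combine_edge_and_saliency(edges, binary_saliency):
    # Per-pixel OR of the Canny edge map and the binarized saliency map (both 0/255);
    # non-zero pixel points in the result form the salient region of the image.
    return cv2.bitwise_or(edges, binary_saliency)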
The embodiment of the image saliency detection device provided by the application is as follows:
in the foregoing embodiment, an image saliency detection method is provided, and accordingly, the present application also provides an image saliency detection apparatus, which is described below with reference to the accompanying drawings.
Referring to fig. 10, a schematic diagram of an embodiment of an image saliency detection apparatus provided by the present application is shown.
Since the apparatus embodiments are substantially similar to the method embodiments, they are described in a relatively simple manner, and reference may be made to the corresponding description of the method embodiments provided above for relevant portions. The device embodiments described below are merely illustrative.
The present application provides an image saliency detection apparatus, comprising:
an edge detection unit 10-01 for detecting an edge feature of the image;
a pixel feature block identification unit 10-02 for identifying a pixel feature block satisfying a preset threshold in the image;
a salient feature determining unit 10-03 for determining salient features of the image based on the identified pixel feature blocks;
a salient region determining unit 10-04, configured to compare the edge feature with the salient feature, and determine a salient region of the image.
Optionally, the significant feature determining unit 10-03 includes:
the saliency feature value calculation subunit is used for calculating the saliency feature value of the pixel point;
the normalization subunit is used for normalizing the significance characteristic values of the pixel points;
a binarization segmentation subunit, configured to perform binarization segmentation on the normalized significant feature value, and determine significant pixel points and non-significant pixel points of the image according to a segmentation result;
wherein the saliency feature value calculation subunit, the normalization subunit and the binarization segmentation subunit operate on at least one pixel point of the image.
Optionally, the significant feature determining unit 10-03 includes:
the first subunit is used for calculating the significance characteristic value of the pixel point of the image;
and the second subunit is used for carrying out binarization segmentation on the significant characteristic values of the pixel points and determining significant pixel points and non-significant pixel points of the image according to segmentation results.
Optionally, the saliency feature value of the pixel point within the identified pixel feature block is a lower limit value of the saliency feature value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization subunit is specifically configured to normalize the saliency characteristic value of the pixel to the gray value of the pixel.
Optionally, the binarization segmentation subunit is specifically configured to determine whether a gray value of the pixel point after normalization is greater than a gray threshold, and if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
Optionally, the edge detecting unit 10-01 is specifically configured to detect the edge feature of the image by using a Canny edge detection operator, and obtain a gray value of the edge feature of a pixel point of the image.
Optionally, the saliency region determination unit 10-04 is specifically configured to perform an OR operation on the gray value of the edge feature of each pixel point of the image, obtained through detection, and the saliency feature value of that pixel point, determined through analysis, and to determine the salient region of the image according to the resulting gray values of the pixel points of the image.
The embodiment of the electronic equipment provided by the application is as follows:
in the foregoing embodiment, an image saliency detection method is provided, and in addition, an electronic device for implementing the image saliency detection method is provided, which is described below with reference to the accompanying drawings.
Referring to fig. 11, a schematic diagram of an electronic device provided in this embodiment is shown.
The embodiment of the electronic device provided in the present application is described relatively simply; for relevant portions, reference may be made to the corresponding description of the embodiment of the image saliency detection method provided above. The embodiment described below is merely illustrative.
The application provides an electronic device, including:
a memory 11-01, and a processor 11-02;
the memory 11-01 is configured to store computer-executable instructions, and the processor 11-02 is configured to execute the computer-executable instructions to:
segmenting a pixel feature block of an image;
detecting an edge feature of the image;
extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks;
determining salient features of the image based on the extracted pixel feature block;
and comparing the edge features with the salient features to determine salient regions of the images.
Optionally, the determining the saliency feature of the image based on the extracted pixel feature block is implemented by:
for at least one pixel point of the image, the following operations are performed: calculating the saliency feature value of the pixel point; normalizing the saliency feature value of the pixel point; and performing binarization segmentation on the normalized saliency feature value, and determining the salient pixel points and non-salient pixel points of the image according to the segmentation result.
Optionally, the determining the saliency feature of the image based on the extracted pixel feature block is implemented by:
calculating the significance characteristic value of the pixel point of the image;
and carrying out binarization segmentation on the significant characteristic values of the pixel points, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the saliency characteristic value of the pixel point in the extracted pixel characteristic block is a lower limit value of the saliency characteristic value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization of the significant characteristic values of the pixel points is implemented by the following method:
and normalizing the significance characteristic value of the pixel point to the gray value of the pixel point.
Optionally, the binarization segmentation of the normalized saliency feature values is implemented in the following manner:
judging whether the gray value of the pixel point is larger than a gray threshold value after normalization, if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
Optionally, the segmenting of the pixel feature block of the image is implemented by adopting the following method:
and segmenting the pixel characteristic blocks of the image by adopting a Mean-shift image segmentation algorithm.
Optionally, the detecting the edge feature of the image is implemented by the following method:
and detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
Optionally, the extracting of the pixel feature blocks which meet the preset threshold from the segmented pixel feature blocks is implemented in the following manner:
and calculating the number of pixel points in the segmented pixel characteristic block, and if the number of the pixel points in the pixel characteristic block exceeds a threshold value of the number of the pixel points, extracting the pixel characteristic block.
Optionally, the extracting of the pixel feature blocks which meet the preset threshold from the segmented pixel feature blocks is implemented in the following manner:
and calculating the area of the segmented pixel feature block, and if the area of the pixel feature block is larger than or equal to an area threshold value, extracting the pixel feature block.
Optionally, the edge feature and the salient feature are compared, and the following method is adopted:
and performing OR operation on the gray value of the edge feature of the pixel point and the saliency feature value according to the gray value of the edge feature of the pixel point of the image obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the saliency region of the image according to the gray value of the pixel point of the image obtained by calculation.
Another embodiment of an electronic device provided by the present application is as follows:
in the above embodiment, another image saliency detection method is provided, and in addition, the present application also provides an electronic device for implementing the another image saliency detection method, which is described below with reference to the accompanying drawings.
Referring to fig. 12, a schematic diagram of another electronic device provided by the present embodiment is shown.
The embodiment of the electronic device provided by the present application is described relatively simply; for relevant portions, reference may be made to the corresponding description of the other image saliency detection method embodiment provided by the present application. The embodiment described below is merely illustrative.
The present application provides another electronic device, comprising:
a memory 12-01, and a processor 12-02;
the memory 12-01 is configured to store computer-executable instructions, and the processor 12-02 is configured to execute the computer-executable instructions to:
detecting edge features of the image;
identifying pixel feature blocks in the image which meet a preset threshold;
determining salient features of the image based on the identified blocks of pixel features;
and comparing the edge features with the salient features to determine salient regions of the images.
Optionally, the determining the salient features of the image based on the identified pixel feature block is implemented by:
for at least one pixel point of the image, the following operations are performed: calculating the saliency feature value of the pixel point; normalizing the saliency feature value of the pixel point; and performing binarization segmentation on the normalized saliency feature value, and determining the salient pixel points and non-salient pixel points of the image according to the segmentation result.
Optionally, the determining the salient features of the image based on the identified pixel feature block is implemented by:
calculating the significance characteristic value of the pixel point of the image;
and carrying out binarization segmentation on the significant characteristic values of the pixel points, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the saliency feature value of the pixel point within the identified pixel feature block is a lower limit value of the saliency feature value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization of the significant characteristic values of the pixel points is implemented by the following method:
and normalizing the significance characteristic value of the pixel point to the gray value of the pixel point.
Optionally, the binarization segmentation of the normalized saliency feature values is implemented in the following manner:
judging whether the gray value of the pixel point is larger than a gray threshold value after normalization, if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
Optionally, the detecting the edge feature of the image is implemented by the following method:
and detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
Optionally, the edge feature and the salient feature are compared, and the following method is adopted:
and performing OR operation on the gray value of the edge feature of the pixel point and the saliency feature value according to the gray value of the edge feature of the pixel point of the image obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the saliency region of the image according to the gray value of the pixel point of the image obtained by calculation.
The embodiment of a computer readable medium provided by the application is as follows:
in the above embodiments, an image saliency detection method is provided, and furthermore, a computer readable medium is provided, on which instructions are stored, and when executed, the instructions perform the image saliency detection method provided by the present application.
The present application provides a computer-readable medium having instructions stored thereon that are executable to:
segmenting a pixel feature block of an image;
detecting an edge feature of the image;
extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks;
determining salient features of the image based on the extracted pixel feature block;
and comparing the edge features with the salient features to determine salient regions of the images.
Optionally, the determining the saliency feature of the image based on the extracted pixel feature block is implemented by:
for at least one pixel point of the image, the following operations are performed:
calculating the significance characteristic value of the pixel point;
normalizing the significant characteristic values of the pixel points;
and carrying out binarization segmentation on the normalized significant characteristic value, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the determining the saliency feature of the image based on the extracted pixel feature block is implemented by:
calculating the significance characteristic value of the pixel point of the image;
and carrying out binarization segmentation on the significant characteristic values of the pixel points, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the saliency characteristic value of the pixel point in the extracted pixel characteristic block is a lower limit value of the saliency characteristic value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization of the significant characteristic values of the pixel points is implemented by the following method:
and normalizing the significance characteristic value of the pixel point to the gray value of the pixel point.
Optionally, the binarization segmentation of the normalized saliency feature values is implemented in the following manner:
judging whether the gray value of the pixel point is larger than a gray threshold value after normalization, if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
Optionally, the segmenting of the pixel feature block of the image is implemented by adopting the following method:
and segmenting the pixel characteristic blocks of the image by adopting a Mean-shift image segmentation algorithm.
Optionally, the detecting the edge feature of the image is implemented by the following method:
and detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
Optionally, the extracting of the pixel feature blocks which meet the preset threshold from the segmented pixel feature blocks is implemented in the following manner:
and calculating the number of pixel points in the segmented pixel characteristic block, and if the number of the pixel points in the pixel characteristic block exceeds a threshold value of the number of the pixel points, extracting the pixel characteristic block.
Optionally, the extracting of the pixel feature blocks which meet the preset threshold from the segmented pixel feature blocks is implemented in the following manner:
and calculating the area of the segmented pixel feature block, and if the area of the pixel feature block is larger than or equal to an area threshold value, extracting the pixel feature block.
Optionally, the edge feature and the salient feature are compared, and the following method is adopted:
and performing OR operation on the gray value of the edge feature of the pixel point and the saliency feature value according to the gray value of the edge feature of the pixel point of the image obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the saliency region of the image according to the gray value of the pixel point of the image obtained by calculation.
Another embodiment of a computer-readable medium provided by the present application is as follows:
in the above embodiment, another image saliency detection method is provided, and furthermore, a computer readable medium is provided, on which instructions are stored, and when executed, the instructions perform the above another image saliency detection method provided by the present application.
The present application provides another computer-readable medium having instructions stored thereon that are executable to:
detecting edge features of the image;
identifying pixel feature blocks in the image which meet a preset threshold;
determining salient features of the image based on the identified blocks of pixel features;
and comparing the edge features with the salient features to determine salient regions of the images.
Optionally, the determining the salient features of the image based on the identified pixel feature block is implemented by:
for at least one pixel point of the image, the following operations are performed: calculating the saliency feature value of the pixel point; normalizing the saliency feature value of the pixel point; and performing binarization segmentation on the normalized saliency feature value, and determining the salient pixel points and non-salient pixel points of the image according to the segmentation result.
Optionally, the determining the salient features of the image based on the identified pixel feature block is implemented by:
calculating the significance characteristic value of the pixel point of the image;
and carrying out binarization segmentation on the significant characteristic values of the pixel points, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
Optionally, the saliency feature value of the pixel point within the identified pixel feature block is a lower limit value of the saliency feature value.
Optionally, the initial value of the saliency characteristic value of the image boundary pixel point is a lower limit value of the saliency characteristic value.
Optionally, except for the image boundary pixel points, the initial values of the saliency characteristic values of the other pixel points are the upper limit values of the saliency characteristic values.
Optionally, the saliency characteristic value of the pixel point is updated by iterative computation based on an initial value of the saliency characteristic value of the pixel point, and the saliency characteristic value of the pixel point is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
Optionally, in the iterative computation process of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the saliency characteristic values of the pixels are sequentially iteratively computed according to the sequence of scanning the pixels in the raster scanning manner.
Optionally, if the number of times of the iterative computation is an odd number, when the pixel points are scanned in the raster scanning manner, the pixel points are sequentially scanned from a first pixel point at the upper left corner to a last pixel point at the lower right corner of the image in a forward scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
Optionally, the normalization of the significant characteristic values of the pixel points is implemented by the following method:
and normalizing the significance characteristic value of the pixel point to the gray value of the pixel point.
Optionally, the binarization segmentation of the normalized saliency feature values is implemented in the following manner:
judging whether the gray value of the pixel point is larger than a gray threshold value after normalization, if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
Optionally, the detecting the edge feature of the image is implemented by the following method:
and detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
Optionally, the edge feature and the salient feature are compared, and the following method is adopted:
and performing OR operation on the gray value of the edge feature of the pixel point and the saliency feature value according to the gray value of the edge feature of the pixel point of the image obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the saliency region of the image according to the gray value of the pixel point of the image obtained by calculation.
Although the present application has been described with reference to preferred embodiments, they are not intended to limit the present application. Those skilled in the art can make variations and modifications without departing from the spirit and scope of the present application; therefore, the scope of protection of the present application should be determined by the claims that follow.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.

Claims (25)

1. An image saliency detection method characterized by comprising:
segmenting a pixel feature block of an image;
detecting an edge feature of the image;
extracting pixel feature blocks which meet a preset threshold value from the segmented pixel feature blocks;
determining salient features of the image based on the extracted pixel feature block;
and comparing the edge features with the salient features to determine salient regions of the images.
2. The image saliency detection method according to claim 1, characterized in that said determining saliency features of said image based on said extracted blocks of pixel features is carried out as follows:
for at least one pixel point of the image, performing the following operations:
calculating the significance characteristic value of the pixel point;
normalizing the significant characteristic values of the pixel points;
and carrying out binarization segmentation on the normalized significant characteristic value, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
3. The image saliency detection method according to claim 2, characterized in that said determining saliency features of said image based on said extracted blocks of pixel features is carried out as follows:
calculating the significance characteristic value of the pixel point of the image;
and carrying out binarization segmentation on the significant characteristic values of the pixel points, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
4. The image saliency detection method according to claim 2 or 3, characterized in that a saliency feature value of a pixel point within the extracted pixel feature block is a lower limit value of the saliency feature value.
5. The image saliency detection method according to claim 4, characterized in that an initial value of a saliency feature value of the image boundary pixel point is a lower limit value of the saliency feature value.
6. The image saliency detection method according to claim 5, characterized in that an initial value of the saliency feature values of the remaining pixels except for the image boundary pixels is an upper limit value of the saliency feature value.
7. The image saliency detection method according to claim 6, characterized in that the saliency feature value of the pixel is updated by iterative computation based on an initial value of the saliency feature value of the pixel, and the saliency feature value of the pixel is determined according to an iterative computation result;
if the iterative computation times are odd, updating the significance characteristic values of the pixel points according to the significance characteristic values of the left adjacent pixel point and the upper adjacent pixel point of the pixel points;
and if the iterative computation times are even numbers, updating the significance characteristic values of the pixel points according to the significance characteristic values of the adjacent pixel points on the right side and the adjacent pixel points on the lower side of the pixel points.
8. The image saliency detection method according to claim 7, characterized in that in the process of iterative calculation of the saliency characteristic values of the pixels, the pixels are scanned in a raster scanning manner, and the iterative calculation of the saliency characteristic values of the pixels is performed in sequence according to the order in which the pixels are scanned in the raster scanning manner.
9. The image saliency detection method according to claim 8, characterized in that if the number of times of said iterative computations is odd, when said pixels are scanned in said raster scanning manner, the pixels are sequentially scanned from a first pixel at an upper left corner to a last pixel at a lower right corner of said image in a positive-order scanning manner;
and if the iterative computation times are even numbers, scanning the pixel points from the last pixel point at the lower right corner of the image to the first pixel point at the upper left corner in sequence according to a reverse scanning mode when scanning the pixel points in the raster scanning mode.
10. The image saliency detection method according to claim 2, characterized in that said normalizing the saliency characteristic values of said pixels is implemented as follows:
and normalizing the significance characteristic value of the pixel point to the gray value of the pixel point.
11. The image saliency detection method according to claim 10, characterized in that the binarization segmentation of the normalized saliency feature values is implemented as follows:
judging whether the gray value of the pixel point is larger than a gray threshold value after normalization, if so, the pixel point is a significant pixel point of the image; and if not, the pixel points are non-significant pixel points of the image.
12. The image saliency detection method according to claim 1, characterized in that the segmentation of the pixel feature blocks of an image is implemented as follows:
and segmenting the pixel characteristic blocks of the image by adopting a Mean-shift image segmentation algorithm.
13. The image saliency detection method according to claim 1, characterized in that said detecting edge features of said image is implemented as follows:
and detecting the edge characteristics of the image by adopting a Canny edge detection operator to obtain the gray value of the edge characteristics of the pixel points of the image.
14. The image saliency detection method according to claim 1, characterized in that the extraction of pixel feature blocks satisfying a preset threshold from among the segmented pixel feature blocks is implemented as follows:
and calculating the number of pixel points in the segmented pixel characteristic block, and if the number of the pixel points in the pixel characteristic block exceeds a threshold value of the number of the pixel points, extracting the pixel characteristic block.
15. The image saliency detection method according to claim 1, characterized in that the extraction of pixel feature blocks satisfying a preset threshold from among the segmented pixel feature blocks is implemented as follows:
and calculating the area of the segmented pixel feature block, and if the area of the pixel feature block is larger than or equal to an area threshold value, extracting the pixel feature block.
16. The image saliency detection method according to claim 1, characterized in that said comparing of the edge features with the saliency features is implemented as follows:
and performing OR operation on the gray value of the edge feature of the pixel point and the saliency feature value according to the gray value of the edge feature of the pixel point of the image obtained by detection and the saliency feature value of the pixel point determined by analysis, and determining the saliency region of the image according to the gray value of the pixel point of the image obtained by calculation.
17. An image saliency detection apparatus characterized by comprising:
the image segmentation unit is used for segmenting the pixel feature block of the image;
an edge detection unit for detecting an edge feature of the image;
a pixel feature block extraction unit, configured to extract a pixel feature block that meets a preset threshold from among the segmented pixel feature blocks;
a saliency feature determination unit for determining a saliency feature of the image based on the extracted pixel feature block;
and the salient region determining unit is used for comparing the edge features with the salient features to determine the salient regions of the images.
18. An image saliency detection method characterized by comprising:
detecting edge features of the image;
identifying pixel feature blocks in the image which meet a preset threshold;
determining salient features of the image based on the identified blocks of pixel features;
and comparing the edge features with the salient features to determine salient regions of the images.
19. The image saliency detection method of claim 18, characterized in that said determining saliency features of said images based on said identified blocks of pixel features is carried out as follows:
for at least one pixel point of the image, performing the following operations:
calculating the significance characteristic value of the pixel point;
normalizing the significant characteristic values of the pixel points;
and carrying out binarization segmentation on the normalized significant characteristic value, and determining significant pixel points and non-significant pixel points of the image according to a segmentation result.
20. The image saliency detection method according to claim 18, characterized in that said comparing the edge features with the saliency features is implemented as follows:
performing an OR operation, for each pixel point, on the gray value of the edge feature obtained by detection and the saliency feature value determined by analysis, and determining the salient region of the image according to the resulting gray values of the pixel points.
21. An image saliency detection apparatus, characterized by comprising:
an edge detection unit for detecting edge features of an image;
a pixel feature block identification unit for identifying pixel feature blocks in the image that meet a preset threshold;
a saliency feature determination unit for determining saliency features of the image based on the identified pixel feature blocks;
and a salient region determination unit for comparing the edge features with the saliency features to determine the salient region of the image.
22. An electronic device, comprising:
a memory and a processor;
wherein the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to:
segment pixel feature blocks of an image;
detect edge features of the image;
extract pixel feature blocks that meet a preset threshold from the segmented pixel feature blocks;
determine saliency features of the image based on the extracted pixel feature blocks;
and compare the edge features with the saliency features to determine the salient region of the image.
23. An electronic device, comprising:
a memory and a processor;
wherein the memory is configured to store computer-executable instructions, and the processor is configured to execute the computer-executable instructions to:
detect edge features of an image;
identify pixel feature blocks in the image that meet a preset threshold;
determine saliency features of the image based on the identified pixel feature blocks;
and compare the edge features with the saliency features to determine the salient region of the image.
24. A computer-readable medium having instructions stored thereon that, when executed, perform the following:
segmenting pixel feature blocks of an image;
detecting edge features of the image;
extracting pixel feature blocks that meet a preset threshold from the segmented pixel feature blocks;
determining saliency features of the image based on the extracted pixel feature blocks;
and comparing the edge features with the saliency features to determine the salient region of the image.
25. A computer-readable medium having instructions stored thereon that, when executed, perform the following:
detecting edge features of an image;
identifying pixel feature blocks in the image that meet a preset threshold;
determining saliency features of the image based on the identified pixel feature blocks;
and comparing the edge features with the saliency features to determine the salient region of the image.
CN201710363563.0A 2017-05-22 2017-05-22 Image significance detection method and device and electronic equipment Active CN108960247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710363563.0A CN108960247B (en) 2017-05-22 2017-05-22 Image significance detection method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN108960247A (en) 2018-12-07
CN108960247B (en) 2022-02-25

Family

ID=64463051

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710363563.0A Active CN108960247B (en) 2017-05-22 2017-05-22 Image significance detection method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN108960247B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109814551A (en) * 2019-01-04 2019-05-28 丰疆智慧农业股份有限公司 Cereal handles automated driving system, automatic Pilot method and automatic identifying method
CN110008969B (en) * 2019-04-15 2021-05-14 京东方科技集团股份有限公司 Method and device for detecting image saliency region
CN110264545A (en) * 2019-06-19 2019-09-20 北京字节跳动网络技术有限公司 Picture Generation Method, device, electronic equipment and storage medium
CN111681256B (en) * 2020-05-07 2023-08-18 浙江大华技术股份有限公司 Image edge detection method, image edge detection device, computer equipment and readable storage medium

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101533512B (en) * 2009-04-24 2012-05-09 西安电子科技大学 Method for automatically extracting interesting image regions based on human visual attention system
AU2011253982B9 (en) * 2011-12-12 2015-07-16 Canon Kabushiki Kaisha Method, system and apparatus for determining a subject and a distractor in an image
US20140003711A1 (en) * 2012-06-29 2014-01-02 Hong Kong Applied Science And Technology Research Institute Co. Ltd. Foreground extraction and depth initialization for multi-view baseline images
CN103996209B (en) * 2014-05-21 2017-01-11 北京航空航天大学 Infrared vessel object segmentation method based on salient region detection
WO2016207875A1 (en) * 2015-06-22 2016-12-29 Photomyne Ltd. System and method for detecting objects in an image
CN105913082B (en) * 2016-04-08 2020-11-27 北京邦视科技有限公司 Method and system for classifying targets in image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7596242B2 (en) * 1995-06-07 2009-09-29 Automotive Technologies International, Inc. Image processing for vehicular applications
CN102938054A (en) * 2012-09-06 2013-02-20 北京工业大学 Method for recognizing compressed-domain sensitive images based on visual attention models
CN102968782A (en) * 2012-09-12 2013-03-13 苏州大学 Automatic digging method for remarkable objects of color images
CN103996189A (en) * 2014-05-05 2014-08-20 小米科技有限责任公司 Image segmentation method and device
CN104217438A (en) * 2014-09-19 2014-12-17 西安电子科技大学 Image significance detection method based on semi-supervision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant