CN114078138A - Image significance detection method and device - Google Patents

Image significance detection method and device Download PDF

Info

Publication number
CN114078138A
Authority
CN
China
Prior art keywords
contrast
saliency
image
significance
pixel point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111397564.XA
Other languages
Chinese (zh)
Inventor
Zhang Hui (张辉)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Bank of China Ltd
Original Assignee
Bank of China Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Bank of China Ltd filed Critical Bank of China Ltd
Priority to CN202111397564.XA priority Critical patent/CN114078138A/en
Publication of CN114078138A publication Critical patent/CN114078138A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Abstract

The invention provides an image saliency detection method and device, usable in the technical field of artificial intelligence. The method comprises the following steps: acquiring a target image; generating four opponent color channels of the target image; finding the opponent color channel with the highest salient contrast among the four opponent color channels; obtaining a saliency map based on that channel; and segmenting the saliency map to obtain the salient object in the target image. The invention realizes saliency detection of images; the method is simple and the results are good.

Description

Image saliency detection method and device
Technical Field
The invention relates to the technical field of artificial intelligence, in particular to an image saliency detection method and device.
Background
Rapid and efficient detection of salient regions in images has great value for automatic bill processing in banks, face detection, target tracking and recognition, LOGO placement in non-salient regions, and the like; detecting the salient region allows images or video to be preprocessed effectively.
Salient-region detection methods proposed by scholars at home and abroad in recent years include the center-surround difference mechanism based on biological vision, models based on a full probability map, and feature-difference models based on sliding windows and image blocks; these generally suffer from high computational complexity and slow response.
Therefore, a simple and efficient method for detecting image saliency is currently lacking.
Disclosure of Invention
An embodiment of the invention provides an image saliency detection method for realizing saliency detection of an image; the method is simple and effective, and comprises the following steps:
acquiring a target image;
generating four opponent color channels of the target image;
finding the opponent color channel with the highest salient contrast among the four opponent color channels;
obtaining a saliency map based on the opponent color channel with the highest salient contrast;
and segmenting the saliency map to obtain the salient object in the target image.
An embodiment of the invention provides an image saliency detection device for realizing saliency detection of an image; the device is simple and effective, and comprises:
a target image acquisition module, configured to acquire a target image;
an opponent color channel generation module, configured to generate four opponent color channels of the target image;
a screening module, configured to find the opponent color channel with the highest salient contrast among the four opponent color channels;
a saliency map obtaining module, configured to obtain a saliency map based on the opponent color channel with the highest salient contrast;
and a salient object obtaining module, configured to segment the saliency map to obtain the salient object in the target image.
An embodiment of the invention also provides a computer device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, the image saliency detection method described above is implemented.
An embodiment of the invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the image saliency detection method described above.
An embodiment of the invention further provides a computer program product comprising a computer program which, when executed by a processor, implements the image saliency detection method described above.
In the embodiment of the invention, a target image is obtained; four opponent color channels of the target image are generated; the opponent color channel with the highest salient contrast is found among the four; a saliency map is obtained based on that channel; and the saliency map is segmented to obtain the salient object in the target image. In this process, four opponent color channels are generated; compared with existing saliency maps computed from three channels, this markedly improves salient object detection, and the overall method is simple and easy to implement.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts. In the drawings:
FIG. 1 is a flowchart of an image saliency detection method according to an embodiment of the present invention;
FIG. 2 is a diagram of an image saliency detection result in an embodiment of the present invention;
fig. 3 is a schematic diagram of results of a detection accuracy P and a recall Q of a salient object obtained by different methods in the embodiment of the present invention;
FIG. 4 is a schematic diagram of segmentation results of salient objects obtained by different methods according to an embodiment of the present invention;
FIG. 5 is a schematic comparison of the precision P, recall Q, and F_β results obtained using adaptive thresholds in an embodiment of the present invention;
FIG. 6 is a diagram illustrating an image saliency detection apparatus according to an embodiment of the present invention;
FIG. 7 is a diagram of a computer device in an embodiment of the invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the embodiments of the present invention are further described in detail below with reference to the accompanying drawings. The exemplary embodiments and descriptions of the present invention are provided to explain the present invention, but not to limit the present invention.
In the description of the present specification, the terms "comprising," "including," "having," "containing," and the like are used in an open-ended fashion, i.e., to mean including, but not limited to. Reference to the description of the terms "one embodiment," "a particular embodiment," "some embodiments," "for example," etc., means that a particular feature, structure, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the application. In this specification, the schematic representations of the terms used above do not necessarily refer to the same embodiment or example. Furthermore, the particular features, structures, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. The sequence of steps involved in the embodiments is for illustrative purposes to illustrate the implementation of the present application, and the sequence of steps is not limited and can be adjusted as needed.
First, terms involved in the embodiments of the present invention are explained.
Image saliency: image saliency combines computer vision with biological neuroscience and psychology, so research results carry uncertainty and relative subjectivity. Visual saliency analysis fuses biological vision mechanisms with machine vision; regions that stand out markedly from the average and attract human visual attention are the salient regions.
Opponent color channel: an opponent color channel preserves the regions of the image with larger gray values; these regions correspond to parts of the original image that easily attract human attention.
The inventors have found that, at an early stage, neurons process simple visual features such as gray-scale contrast, gradient direction, and object motion, which elicit different responses in the human eye. Among these, color components have a large impact on detecting salient objects in a scene. The visual cells of the human eye contain large numbers of rods and cones that are sensitive to three different kinds of light (R, G, B). The Human Visual System (HVS) cannot process all information entering the eye equally, and Color Contrast (CC) has a very important influence on the detection of salient regions in images. Osberger pointed out that a region whose color differs markedly from that of its surroundings is more attractive to the human eye. Based on these theories, the embodiment of the invention applies opponent-color theory to image saliency detection. Starting from the basic definition of saliency, namely that the human eye always attends first to image regions with obvious gray contrast against the background, a global detection method is proposed: a salient object detection model based on opponent color channels. The model first generates four opponent color channels, then adaptively selects one according to a visual criterion, and computes the image regions whose color differs markedly from the mean, i.e. the salient regions in the image. Being based on global color features and on the basic definition of saliency, the method is simple and efficient.
Fig. 1 is a flowchart of an image saliency detection method according to an embodiment of the present invention. As shown in fig. 1, the method includes:
step 101, acquiring a target image;
step 102, generating four opponent color channels of the target image;
step 103, finding the opponent color channel with the highest salient contrast among the four opponent color channels;
step 104, obtaining a saliency map based on the opponent color channel with the highest salient contrast;
and step 105, segmenting the saliency map to obtain the salient object in the target image.
In the embodiment of the invention, four opponent color channels are generated; compared with existing saliency maps computed from three channels, this markedly improves salient object detection, and the overall method is simple and easy to implement.
In step 101, a target image is acquired.
The target image may be any image, such as a bill or a video frame.
In step 102, four antagonistic color channels of the target image are generated.
In one embodiment, generating the four opponent color channels of the target image comprises:
obtaining the R, G, B gray values of the target image;
and generating the four opponent color channels from the R, G, B gray values of the target image.
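The two steps above can be sketched in Python with NumPy. The patent gives its channel formulas only as equation images, so the standard red-green / green-red / blue-yellow / yellow-blue opponent definitions are assumed here purely as an illustration; the function name `opponent_channels` is likewise illustrative.

```python
import numpy as np

def opponent_channels(img):
    """Split an H x W x 3 RGB image into four opponent color channels.

    ASSUMPTION: the standard RG / GR / BY / YB opponent definitions are
    used here; the patent's exact formulas appear only as equation images.
    """
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    rg = np.clip(r - g, 0, None)             # red opposed by green
    gr = np.clip(g - r, 0, None)             # green opposed by red
    by = np.clip(b - (r + g) / 2, 0, None)   # blue opposed by yellow
    yb = np.clip((r + g) / 2 - b, 0, None)   # yellow opposed by blue
    return rg, gr, by, yb
```

On a strongly red image the RG channel dominates while GR is everywhere zero, which matches the intuition that each channel keeps only one side of the opponency.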
In the above embodiment, the four opponent color channels are each computed from the R, G, B gray values of the target image, forming opposing color pairs (the exact channel formulas are given as equation images in the original document).
In step 103, the opponent color channel with the highest salient contrast is found among the four opponent color channels.
Because biological visual systems are always more sensitive to visual signals with strong contrast, they also tend toward the most salient visual information among the different color channels. Accordingly, in the embodiment of the present invention, channels with strong color contrast are considered first, and the channel carrying the largest amount of information is determined to be the opponent color channel with the highest salient contrast.
In one embodiment, finding the opponent color channel with the highest salient contrast among the four opponent color channels comprises:
calculating the color contrast of each opponent color channel from the gray values of that channel;
calculating the information content of each opponent color channel based on its color contrast;
and determining the opponent color channel with the largest information content to be the one with the highest salient contrast.
The above is an adaptive method in which the variance σ_k represents the color contrast of each opponent color channel:

σ_k = sqrt( (1/(W×H)) · Σ_i Σ_j ( I_k(i,j) − Ī_k )² )

where Ī_k denotes the color mean of opponent channel k, I_k(i,j) is the gray value at pixel (i,j), and W and H denote the width and height of the target image.
The channel with the largest information content is then selected as:

c = arg max_k { σ_k }
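A minimal sketch of this adaptive selection with NumPy. Since the square root is monotonic, taking the arg max of the variance gives the same channel as the arg max of σ_k; the function name `select_channel` is an illustrative assumption.

```python
import numpy as np

def select_channel(channels):
    """Pick the opponent channel with the highest color contrast,
    measured by the per-channel variance (arg max is unchanged by
    the square root in the sigma_k definition)."""
    sigmas = [float(np.var(ch)) for ch in channels]
    c = int(np.argmax(sigmas))   # c = arg max_k { sigma_k }
    return c, sigmas
```

A flat channel has zero variance and is never chosen; a high-contrast checkerboard channel wins.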
in step 104, a saliency map is obtained based on the contrast channel with the highest saliency contrast.
In one embodiment, obtaining a saliency map based on the contrast channel with the highest saliency contrast includes:
determining the first significance of each pixel point according to the pixel gray value of each pixel point in the contrast color channel with the highest significant contrast;
carrying out gray stretching on the first significance of each pixel point to obtain a gray stretching value of each pixel point;
and calculating the second significance of each pixel point according to the gray stretching value of each pixel point, wherein the second significance of all the pixel points of the image corresponding to the contrast channel with the highest significant contrast constitutes a significance map.
In the above embodiment, in the selected opponent color channel c, pixels whose values differ markedly from the color mean Ī_c come from the salient region, while pixels whose values are close to Ī_c are assumed to come from the background. Therefore, in an embodiment, determining the first saliency of each pixel from the pixel gray values of the opponent color channel with the highest salient contrast comprises:
calculating the pixel gray value of each pixel in that channel and the channel pixel mean;
and determining the first saliency of each pixel from its gray value and the channel pixel mean.
The first saliency of pixel (i, j) may be expressed as:

s(i,j) = | I_c(i,j) − Ī_c |

where s(i, j) is the first saliency of pixel (i, j), Ī_c denotes the color mean of opponent channel c, and I_c(i, j) is the gray value of pixel (i, j).
The gray stretch maps the first saliency into the range [0, 255]; it is applied to the first saliency of each pixel using the following formula:

s'(i,j) = 255 × ( s(i,j) − s_min ) / ( s_max − s_min )

where s'(i, j) is the gray-stretch value of pixel (i, j), and s_min and s_max denote the minimum and maximum of the first saliency over the image corresponding to opponent channel c. After stretching, the values are distributed over the full range [0, 255].
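The first-saliency and gray-stretch steps can be sketched together as follows (a NumPy illustration; the flat-channel guard for the degenerate case s_max = s_min is an added assumption not stated in the text):

```python
import numpy as np

def saliency_stretched(channel):
    """First saliency s(i,j) = |I_c(i,j) - mean(I_c)|, then linearly
    stretched to [0, 255] as in the gray-stretch step."""
    s = np.abs(channel - channel.mean())
    smin, smax = s.min(), s.max()
    if smax == smin:                     # flat channel: nothing salient
        return np.zeros_like(s)
    return 255.0 * (s - smin) / (smax - smin)
```

The pixel farthest from the channel mean receives the value 255 and the pixel closest to it receives 0.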
In an embodiment, calculating the second saliency of each pixel point according to the gray scale stretching value of each pixel point includes:
calculating a space attention function value of each pixel point;
and calculating the second significance of each pixel point according to the gray stretching value and the space attention function value of each pixel point.
It should be noted that the foveal structure of the human eye directs attention to objects in the central field of a scene; in an image, the human eye likewise tends to attend to the central area and ignore the periphery. The embodiment of the present invention therefore defines a spatial attention function f(i, j) to represent this "center-periphery" effect of human viewing:

f(i,j) = 1 − d(i,j) / (2L)

where d(i, j) is the distance from pixel (i, j) to the center of the image corresponding to opponent channel c, and L is half the diagonal length of the image. Then f(i, j) ∈ [1/2, 1], and this spatial function reflects the center-periphery decay of human visual attention.
Therefore, the second saliency of each pixel is calculated using the following formula:
S(i,j) = s'(i,j) × f(i,j)
where S(i, j) is the second saliency of pixel (i, j).
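A sketch of the spatial attention weighting. The pixel-coordinate conventions for the image center and the half-diagonal L below are assumptions, since the text does not spell out the discrete definitions:

```python
import numpy as np

def spatial_attention(height, width):
    """Center-periphery weight f(i,j) = 1 - d(i,j) / (2L), where d is
    the distance to the image center and L is half the diagonal,
    so f lies in [1/2, 1]."""
    ci, cj = (height - 1) / 2.0, (width - 1) / 2.0
    i, j = np.mgrid[0:height, 0:width]
    d = np.sqrt((i - ci) ** 2 + (j - cj) ** 2)
    L = np.sqrt(height ** 2 + width ** 2) / 2.0
    return 1.0 - d / (2.0 * L)

def second_saliency(stretched, f):
    # S(i,j) = s'(i,j) * f(i,j)
    return stretched * f
```

The weight is exactly 1 at the image center and decays toward (but never below) 1/2 at the corners.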
In step 105, the saliency map is segmented to obtain salient objects in the target image.
The salient object is generally an object or figure of interest to the human eye.
In one embodiment, segmenting the saliency map to obtain the salient object in the target image comprises:
dividing the saliency map into a plurality of blocks;
for each block, calculating the mean of the second saliencies of all its pixels and, if the mean is greater than a segmentation threshold, marking the pixels in that block as salient;
and taking all salient pixels in the saliency map together to constitute the salient object.
In one embodiment, segmenting the saliency map into a plurality of blocks comprises:
segmenting the saliency map with the Mean-shift algorithm. The Mean-shift algorithm is widely used in mode detection, clustering, image segmentation, and target tracking; it is simple to compute, converges quickly, is robust to noise, and is an efficient non-parametric iterative algorithm.
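The embodiment relies on the full Mean-shift algorithm. As a self-contained illustration of its mode-seeking idea only, here is a toy one-dimensional flat-kernel variant over gray values; this is not the segmentation actually used, and the bandwidth and label quantization are illustrative choices:

```python
import numpy as np

def mean_shift_1d(values, bandwidth=20.0, iters=30):
    """Toy 1-D mean-shift: each point is iteratively moved to the mean
    of the input values within `bandwidth` of it (flat kernel); the
    converged modes are quantized into block labels."""
    modes = values.astype(float).copy()
    for _ in range(iters):
        dist = np.abs(modes[:, None] - values[None, :])
        w = (dist <= bandwidth).astype(float)       # flat-kernel weights
        modes = (w * values[None, :]).sum(axis=1) / w.sum(axis=1)
    return np.round(modes / bandwidth).astype(int)  # mode -> block label
```

Two well-separated clusters of gray values converge to two distinct modes, hence two blocks.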
It should be noted that the segmentation threshold may be a fixed value taken from [0, 255] or a dynamic value. In an embodiment, the segmentation threshold is calculated using the following formula:

T_a = (2/(W×H)) · Σ_i Σ_j S(i,j)

where T_a is the segmentation threshold, W and H are the width and height of the saliency map, and S(i, j) is the second saliency of pixel (i, j). This T_a is a dynamic value, also called an adaptive threshold.
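A sketch of the adaptive-threshold block segmentation. The Achanta-style threshold of twice the mean saliency is assumed here, since the patent's T_a formula is given only as an equation image; `salient_mask` takes precomputed block labels (e.g. from a Mean-shift segmentation):

```python
import numpy as np

def adaptive_threshold(S):
    """T_a = (2 / (W*H)) * sum S(i,j): twice the mean second saliency
    (assumed Achanta-style adaptive threshold)."""
    return 2.0 * S.mean()

def salient_mask(S, labels):
    """Mark every block whose mean second saliency exceeds T_a."""
    Ta = adaptive_threshold(S)
    mask = np.zeros(S.shape, dtype=bool)
    for lab in np.unique(labels):
        block = labels == lab
        if S[block].mean() > Ta:
            mask[block] = True
    return mask
```

A block of uniformly high saliency is kept; background blocks fall below twice the image mean and are discarded.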
The segmentation threshold thus depends on the width and height of the saliency map rather than being a fixed value, which gives higher accuracy. An example is given below to illustrate the effect of the method proposed by the invention.
Experiments were performed on the 1000 images provided by R. Achanta et al., comparing the method proposed in this embodiment with six other methods (IT, MZ, GB, SR, CA, AC) in two sets of comparative experiments.
The first set of experiments is image segmentation under a fixed segmentation threshold; fig. 2 shows an image saliency detection result in an embodiment of the present invention. The segmentation threshold T takes values in [0, 255]; for each value of T a binarized segmentation result is obtained and compared with the ground truth (the binarized, manually marked salient region), and the result is evaluated with precision P and recall Q, defined respectively as:

P = ( Σ_{i,j} G(i,j)·S(i,j) ) / ( Σ_{i,j} S(i,j) )

Q = ( Σ_{i,j} G(i,j)·S(i,j) ) / ( Σ_{i,j} G(i,j) )

where S denotes the binarized segmentation of the saliency map and G the ground truth. Varying T continuously over [0, 255] yields a precision vs. recall curve, shown in fig. 3. In this curve, small values of T give relatively low precision, because the corresponding salient region is inaccurate at small T; disregarding the small-T regime, the method SE provided by the embodiment of the present invention is superior to the other six methods.
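The precision/recall scoring can be sketched for binary masks; the intersection G∩S is implemented as a logical AND, and the max(…, 1) guards against empty masks (an added assumption for the degenerate case):

```python
import numpy as np

def precision_recall(seg, gt):
    """P = |seg AND gt| / |seg|, Q = |seg AND gt| / |gt|
    for binary segmentation and ground-truth masks."""
    tp = float(np.logical_and(seg, gt).sum())
    P = tp / max(seg.sum(), 1)   # precision over segmented pixels
    Q = tp / max(gt.sum(), 1)    # recall over ground-truth pixels
    return P, Q
```

Sweeping the binarization threshold T and calling this at each step traces out the precision vs. recall curve of fig. 3.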
In the second set of experiments, the salient region in the saliency map is segmented with the adaptive threshold to obtain a binarized salient object region. The image is adaptively partitioned into blocks z_i with the Mean-shift method; for each block z_i the mean saliency over all its pixels is calculated, and if that mean exceeds the threshold, the pixels in region z_i are marked as salient. The saliency maps obtained by the IT, MZ, GB, SR, CA, and AC methods are segmented in the same way; the segmentation results are shown in FIG. 4.
Besides the precision vs. recall curve, the embodiment also uses the F-Measure for comparison:

F_β = (1+β²)·P·Q / (β²·P + Q)

where a larger F_β indicates a better result. In the embodiment of the invention, to emphasize the importance of precision, β² is chosen as 0.3. For each binarized segmented image the precision and recall are computed and F_β is derived, giving the histogram of fig. 5; it can be seen that the method provided by the embodiment of the present invention performs better than the other six methods.
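The F-measure can be sketched directly; β² = 0.3 follows the text:

```python
def f_measure(P, Q, beta2=0.3):
    """F_beta = (1 + beta^2) * P * Q / (beta^2 * P + Q),
    with beta^2 = 0.3 weighting precision more heavily."""
    return (1.0 + beta2) * P * Q / (beta2 * P + Q)
```

With perfect precision and recall (P = Q = 1) the measure is 1; with P = Q it simply equals their common value.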
In summary, the method provided by the embodiment of the invention has the following beneficial effects: among the four opponent color channels, the most informative channel, namely the one with the highest salient contrast, is selected adaptively; the salient objects in the image are then segmented adaptively, and the saliency map is weighted pixel-wise using the human visual mechanism and the center-periphery attention mechanism, improving accuracy and efficiency. In the field of image processing, this provides an effective preprocessing step for character recognition, face recognition, object tracking, advertisement processing, and the like.
The invention also provides an image saliency detection apparatus; its principle is the same as that of the image saliency detection method and is not repeated here.
Fig. 6 is a schematic diagram of an image saliency detection apparatus according to an embodiment of the present invention. As shown in fig. 6, the apparatus includes:
a target image acquisition module 601, configured to acquire a target image;
an opponent color channel generation module 602, configured to generate four opponent color channels of the target image;
a screening module 603, configured to find the opponent color channel with the highest salient contrast among the four opponent color channels;
a saliency map obtaining module 604, configured to obtain a saliency map based on the opponent color channel with the highest salient contrast;
and a salient object obtaining module 605, configured to segment the saliency map to obtain the salient object in the target image.
In an embodiment, the opponent color channel generation module 602 is specifically configured to:
obtain the R, G, B gray values of the target image;
and generate the four opponent color channels from the R, G, B gray values of the target image.
In an embodiment, the screening module 603 is specifically configured to:
calculate the color contrast of each opponent color channel from its gray values;
calculate the information content of each opponent color channel based on its color contrast;
and determine the opponent color channel with the largest information content to be the one with the highest salient contrast.
In an embodiment, the saliency map obtaining module 604 is specifically configured to:
determine the first saliency of each pixel from the pixel gray values of the opponent color channel with the highest salient contrast;
gray-stretch the first saliency of each pixel to obtain a gray-stretch value for each pixel;
and calculate the second saliency of each pixel from its gray-stretch value, the second saliencies of all pixels of the image corresponding to the chosen opponent color channel constituting the saliency map.
In an embodiment, the saliency map obtaining module 604 is specifically configured to:
calculate the pixel gray value of each pixel in the opponent color channel with the highest salient contrast and the channel pixel mean;
and determine the first saliency of each pixel from its gray value and the channel pixel mean.
In an embodiment, the saliency map obtaining module 604 is specifically configured to:
calculate the spatial attention function value of each pixel;
and calculate the second saliency of each pixel from its gray-stretch value and spatial attention function value.
In one embodiment, the salient object obtaining module 605 is specifically configured to:
divide the saliency map into a plurality of blocks;
for each block, calculate the mean of the second saliencies of all its pixels and, if the mean is greater than a segmentation threshold, mark the pixels in that block as salient;
and take all salient pixels in the saliency map together to constitute the salient object.
In one embodiment, the salient object obtaining module 605 is specifically configured to:
segment the saliency map with the Mean-shift algorithm to obtain a plurality of blocks.
In one embodiment, the segmentation threshold is calculated using the following formula:

T_a = (2/(W×H)) · Σ_i Σ_j S(i,j)

where T_a is the segmentation threshold, W and H are the width and height of the saliency map, and S(i, j) is the second saliency of pixel (i, j).
In summary, the apparatus provided by the embodiment of the invention has the following beneficial effects: among the four opponent color channels, the most informative channel, namely the one with the highest salient contrast, is selected adaptively; the salient objects in the image are then segmented adaptively, and the saliency map is weighted pixel-wise using the human visual mechanism and the center-periphery attention mechanism, improving accuracy and efficiency. In the field of image processing, this provides an effective preprocessing step for character recognition, face recognition, object tracking, advertisement processing, and the like.
Fig. 7 is a schematic diagram of a computer device according to an embodiment of the present invention. The computer device 700 includes a memory 710, a processor 720, and a computer program 730 stored in the memory 710 and executable on the processor 720; when the processor 720 executes the computer program 730, the image saliency detection method described above is implemented.
An embodiment of the present invention further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements all the steps of the image saliency detection method described above.
An embodiment of the present invention further provides a computer program product comprising a computer program which, when executed by a processor, implements all the steps of the image saliency detection method described above.
As will be appreciated by one skilled in the art, embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are only exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (13)

1. An image saliency detection method, characterized by comprising:
acquiring a target image;
generating four opponent color channels of the target image;
finding, from the four opponent color channels, the opponent color channel with the highest saliency contrast;
obtaining a saliency map based on the opponent color channel with the highest saliency contrast;
and segmenting the saliency map to obtain a salient object in the target image.
2. The image saliency detection method of claim 1, characterized in that generating four opponent color channels of the target image comprises:
obtaining the R, G and B grey values of the target image;
and generating the four opponent color channels according to the R, G and B grey values of the target image.
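By way of illustration only, the channel-generation step of claim 2 can be sketched with the classical red-green / blue-yellow opponency used in visual attention models; these formulas are an assumption, not necessarily the patent's exact definitions:

```python
import numpy as np

def opponent_channels(img):
    """Generate four opponent color channels (RG, GR, BY, YB) from an
    H x W x 3 RGB image. The red-green / blue-yellow opponency below is
    an assumed, conventional formulation."""
    r = img[..., 0].astype(float)
    g = img[..., 1].astype(float)
    b = img[..., 2].astype(float)
    y = (r + g) / 2.0                 # yellow component
    rg = np.clip(r - g, 0, None)      # red minus green
    gr = np.clip(g - r, 0, None)      # green minus red
    by = np.clip(b - y, 0, None)      # blue minus yellow
    yb = np.clip(y - b, 0, None)      # yellow minus blue
    return rg, gr, by, yb
```

For a pure red pixel (255, 0, 0), the RG channel responds strongly while GR and BY stay at zero.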
3. The image saliency detection method of claim 1, characterized in that finding the opponent color channel with the highest saliency contrast from the four opponent color channels comprises:
calculating the color contrast of each opponent color channel according to the grey values of that opponent color channel;
calculating the information amount of each opponent color channel based on its color contrast;
and determining the opponent color channel with the largest information amount as the opponent color channel with the highest saliency contrast.
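As one hedged reading of claim 3, the "information amount" of a channel can be approximated by the Shannon entropy of its intensity histogram; the helper names and the entropy measure below are illustrative assumptions:

```python
import numpy as np

def channel_entropy(chan, bins=256):
    """Shannon entropy (bits) of a channel's intensity histogram,
    used here as a stand-in for the claim's 'information amount'."""
    hist, _ = np.histogram(chan, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before log
    return float(-(p * np.log2(p)).sum())

def pick_channel(channels):
    """Return the opponent channel with the largest information amount."""
    return max(channels, key=channel_entropy)
```

A flat, uniform channel carries zero entropy; a channel with spread-out grey values carries more and is selected.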
4. The image saliency detection method of claim 1, characterized in that obtaining a saliency map based on the opponent color channel with the highest saliency contrast comprises:
determining the first saliency of each pixel point according to the pixel grey value of that pixel point in the opponent color channel with the highest saliency contrast;
performing grey stretching on the first saliency of each pixel point to obtain a grey stretch value for each pixel point;
and calculating the second saliency of each pixel point according to its grey stretch value, wherein the second saliencies of all the pixel points of the image corresponding to the opponent color channel with the highest saliency contrast constitute the saliency map.
5. The image saliency detection method of claim 4, characterized in that determining the first saliency of each pixel point from its pixel grey value in the opponent color channel with the highest saliency contrast comprises:
calculating the pixel grey value of each pixel point in the opponent color channel with the highest saliency contrast, and the pixel mean value of that channel;
and determining the first saliency of each pixel point according to its pixel grey value and the channel pixel mean value.
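One plausible reading of claim 5 takes the first saliency as the absolute deviation of each pixel's grey value from the channel mean; this sketch assumes that interpretation:

```python
import numpy as np

def first_saliency(chan):
    """First saliency of each pixel point: absolute deviation of the
    pixel grey value from the channel pixel mean (an assumed reading
    of claim 5, not the patent's verbatim formula)."""
    return np.abs(chan.astype(float) - chan.mean())
```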
6. The image saliency detection method of claim 4, characterized in that calculating the second saliency of each pixel point according to its grey stretch value comprises:
calculating a spatial attention function value for each pixel point;
and calculating the second saliency of each pixel point according to its grey stretch value and spatial attention function value.
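Claims 4 and 6 combine a grey-stretch step with a spatial attention value. The Gaussian centre bias below is an assumption standing in for the patent's unspecified spatial attention function, and the product combination is likewise illustrative:

```python
import numpy as np

def grey_stretch(s, lo=0.0, hi=255.0):
    """Linearly stretch first-saliency values to the range [lo, hi]."""
    smin, smax = s.min(), s.max()
    if smax == smin:
        return np.full_like(s, lo, dtype=float)
    return lo + (s - smin) * (hi - lo) / (smax - smin)

def spatial_attention(h, w, sigma=0.33):
    """Gaussian centre-bias weight per pixel; the Gaussian form and
    sigma are assumptions (a common spatial-attention prior)."""
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    d2 = ((ys - cy) / h) ** 2 + ((xs - cx) / w) ** 2
    return np.exp(-d2 / (2 * sigma ** 2))

def second_saliency(first):
    """Second saliency: stretched first saliency weighted by the
    spatial attention value at each pixel point."""
    return grey_stretch(first) * spatial_attention(*first.shape)
```

The centre bias simply down-weights pixels far from the image centre, a common prior for photographic subjects.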
7. The image saliency detection method of claim 1, characterized in that segmenting the saliency map to obtain salient objects in the target image comprises:
segmenting the saliency map to obtain a plurality of blocks;
for each block, calculating the mean of the second saliencies of all the pixel points in the block, and if the mean is greater than a segmentation threshold, determining that the pixel points in the block are salient pixel points;
and determining that all the salient pixel points in the saliency map constitute the salient object.
8. The image saliency detection method of claim 7, characterized in that segmenting the saliency map to obtain a plurality of blocks comprises:
segmenting the saliency map by using a Mean-shift algorithm to obtain the plurality of blocks.
9. The image saliency detection method of claim 7 characterized in that said segmentation threshold is calculated with the following formula:
Ta = (2 / (W × H)) × Σ(i=1..W) Σ(j=1..H) S(i, j)
wherein Ta is the segmentation threshold; W and H are the width and height of the saliency map, respectively; and S(i, j) is the second saliency of the pixel point (i, j).
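A sketch of claims 7 and 9 under stated assumptions: the factor 2 in the threshold mirrors the common adaptive-threshold formulation (the formula image in the source text is not legible), and a precomputed label map stands in for the Mean-shift blocks of claim 8:

```python
import numpy as np

def segmentation_threshold(sal):
    """Adaptive threshold Ta = 2/(W*H) * sum of second saliencies.
    The factor 2 is an assumption based on the usual adaptive
    formulation; it is not confirmed by the source text."""
    h, w = sal.shape
    return 2.0 * sal.sum() / (w * h)

def salient_mask(sal, blocks):
    """Mark a block's pixel points as salient when the block's mean
    second saliency exceeds Ta. `blocks` is a label map with the same
    shape as `sal`; here it stands in for Mean-shift segmentation."""
    t = segmentation_threshold(sal)
    mask = np.zeros_like(sal, dtype=bool)
    for lbl in np.unique(blocks):
        region = blocks == lbl
        if sal[region].mean() > t:
            mask |= region
    return mask
```

The union of all salient blocks then constitutes the salient object of claim 7.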
10. An image saliency detection apparatus, characterized by comprising:
a target image acquisition module, configured to acquire a target image;
an opponent color channel generation module, configured to generate four opponent color channels of the target image;
a screening module, configured to find, from the four opponent color channels, the opponent color channel with the highest saliency contrast;
a saliency map obtaining module, configured to obtain a saliency map based on the opponent color channel with the highest saliency contrast;
and a salient object obtaining module, configured to segment the saliency map to obtain salient objects in the target image.
11. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 9 when executing the computer program.
12. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program which, when executed by a processor, implements the method of any one of claims 1 to 9.
13. A computer program product, characterized in that the computer program product comprises a computer program which, when being executed by a processor, carries out the method of any one of claims 1 to 9.
CN202111397564.XA 2021-11-23 2021-11-23 Image significance detection method and device Pending CN114078138A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111397564.XA CN114078138A (en) 2021-11-23 2021-11-23 Image significance detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111397564.XA CN114078138A (en) 2021-11-23 2021-11-23 Image significance detection method and device

Publications (1)

Publication Number Publication Date
CN114078138A true CN114078138A (en) 2022-02-22

Family

ID=80284043

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111397564.XA Pending CN114078138A (en) 2021-11-23 2021-11-23 Image significance detection method and device

Country Status (1)

Country Link
CN (1) CN114078138A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114332489A (en) * 2022-03-15 2022-04-12 江西财经大学 Image salient target detection method and system based on uncertainty perception
CN114332489B (en) * 2022-03-15 2022-06-24 江西财经大学 Image salient target detection method and system based on uncertainty perception

Similar Documents

Publication Publication Date Title
US7668338B2 (en) Person tracking method and apparatus using robot
CN107194317B (en) Violent behavior detection method based on grid clustering analysis
CN104966285B (en) A kind of detection method of salient region
CN110717896A (en) Plate strip steel surface defect detection method based on saliency label information propagation model
Lee et al. An intelligent depth-based obstacle detection system for visually-impaired aid applications
CN111104943A (en) Color image region-of-interest extraction method based on decision-level fusion
CN111009005A (en) Scene classification point cloud rough registration method combining geometric information and photometric information
CN116758528B (en) Acrylic emulsion color change identification method based on artificial intelligence
CN105184771A (en) Adaptive moving target detection system and detection method
Swami et al. Candy: Conditional adversarial networks based fully end-to-end system for single image haze removal
CN114078138A (en) Image significance detection method and device
CN111274964A (en) Detection method for analyzing water surface pollutants based on visual saliency of unmanned aerial vehicle
KR101600617B1 (en) Method for detecting human in image frame
CN106446832B (en) Video-based pedestrian real-time detection method
He et al. A novel way to organize 3D LiDAR point cloud as 2D depth map height map and surface normal map
CN104408712B (en) Information fusion-based hidden Markov salient region detection method
Veeravasarapu et al. Fast and fully automated video colorization
CN111859022A (en) Cover generation method, electronic device and computer-readable storage medium
CN114078099A (en) Image significance detection method and device based on histogram gray feature pair
Wang et al. Improved cell segmentation with adaptive bi-Gaussian mixture models for image contrast enhancement pre-processing
Ravikumar et al. Comparison of SOM Algorithm and K-Means Clustering Algorithm in Image Segmentation
CN114463860B (en) Training method of detection model, living body detection method and related device
Aydin et al. A new object detection and classification method for quality control based on segmentation and geometric features
Du et al. Multi-level iris video image thresholding
Wan et al. Fast Image Dehazing Using Color Attributes Prior

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination