CN111666952B - Label context-based salient region extraction method and system

Info

Publication number
CN111666952B
CN111666952B
Authority
CN
China
Prior art keywords
label
image
tag
training
sequence
Prior art date
Legal status
Active
Application number
CN202010441556.XA
Other languages
Chinese (zh)
Other versions
CN111666952A (en)
Inventor
梁晔
Current Assignee
Beijing Tengxin Soft Innovation Technology Co., Ltd.
Original Assignee
Beijing Tengxin Soft Innovation Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Beijing Tengxin Soft Innovation Technology Co., Ltd.
Priority to CN202010441556.XA
Publication of CN111666952A
Application granted
Publication of CN111666952B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/46 Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/11 Region-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning

Abstract

The invention provides a salient region extraction method and system based on label context. The method comprises a training step and a testing step. The training step comprises: providing a training image set I containing Q images, each image carrying tag information, the tag set T containing N tags, together with the reference saliency map set S corresponding to the training image set I; for an image $I_j$ in the training set, reading the corresponding reference saliency map $S_j$ and the corresponding tag sequence $T_j$; mapping the tag sequence of image $I_j$ to different regions of the image to obtain a segmented region sequence; calculating, from the reference saliency map $S_j$ of image $I_j$, the saliency value corresponding to each region in the segmented region sequence; calculating the correlation of the tags from the saliency value sequence of the segmented region sequence; performing the above calculation on each image of the training set to obtain a set M of influence factor matrices; and calculating the average influence factor matrix $\bar{M}$ of the matrix set M.

Description

Label context-based salient region extraction method and system
Technical Field
The invention relates to the technical field of computer vision, in particular to a method and a system for extracting a salient region based on label context.
Background
Attention is a human cognitive process; it is a psychological concept and an important component of visual perception. Saliency detection, which simulates the attention mechanism computationally, involves psychology, neuroscience, biological vision, computer vision and other related fields, and is a multidisciplinary research area. Visual attention mechanisms fall into two main categories: a bottom-up, data-driven pre-attention mechanism and a top-down, task-driven post-attention mechanism. Accordingly, saliency detection methods can be classified into bottom-up and top-down methods. As research has progressed, researchers have found that simple image-dependent features such as color, shape and texture are insufficient for salient region extraction, so more and more researchers use information external to the image to assist the computation of salient regions. The tags of an image are very important external cue information. Although tag semantics have been widely used in the field of image annotation, tag information is usually handled separately from the salient object extraction task, and little work applies it to salient object extraction.
The document [Wen Wang, Congyan Lang, Songhe Feng. Contextualizing Tag Ranking and Saliency Detection for Social Images. Advances in Multimedia Modeling, Lecture Notes in Computer Science, Volume 7733, 2013, pp. 428-435] integrates the tag ranking task and the saliency detection task, performing the two iteratively. The document [Zhu, G., Wang, Q., Yuan, Y. Tag-Saliency: Combining bottom-up and top-down information for saliency detection. Computer Vision and Image Understanding, 2014, 118(1): 40-49] annotates multimedia data by means of hierarchy-based over-segmentation and automatic annotation techniques. A common shortcoming of both documents is that neither takes the contextual relationships between the tags into account.
Disclosure of Invention
To solve the above technical problem, the salient region extraction method and system based on label context provided by the invention introduce the semantic information of tags into the salient region extraction of images as an important external cue, while taking the contextual relationships among the tags into account.
A first object of the present invention is to provide a salient region extraction method based on label context, comprising a training step and a testing step, the training step comprising the following sub-steps:
Step 01: providing a training image set I, wherein the image set I comprises Q images; each image carries tag information, the tag set T comprises N tags, and the training image set I has a corresponding reference saliency map set S;
Step 02: for an image $I_j$ in the training set, reading the corresponding reference saliency map $S_j$ and the corresponding tag sequence $T_j = (t_1^j, t_2^j, \ldots, t_N^j)$, where $t_i^j$ records whether tag $i$ appears in the training picture;
Step 03: mapping the tag sequence corresponding to image $I_j$ to different regions of the image, i.e., segmenting the image to obtain a segmented region sequence $R_j = (r_1^j, r_2^j, \ldots)$;
Step 04: calculating, from the reference saliency map $S_j$ of image $I_j$, the saliency value $sv_i^j$ corresponding to each region $r_i^j$ in the segmented region sequence;
Step 05: calculating the correlation of the tags from the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ of the segmented region sequence;
Step 06: performing the calculation of steps 02 to 05 on each image of the training set to obtain the set of influence factor matrices M = {M_1, M_2, ..., M_j, ..., M_Q};
Step 07: calculating the average influence factor matrix $\bar{M}$ of the matrix set M.
Preferably, $t_i^j = 1$ indicates that the $i$-th tag appears, and $t_i^j = 0$ indicates that the $i$-th tag does not appear.
In any of the above aspects, it is preferable that the saliency value $sv_i^j$ of segmented region $r_i^j$ equals the average of the saliency values of all pixels in region $r_i^j$, yielding the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ corresponding to the region sequence.
In any of the above schemes, it is preferable that the contextual relationship between two tags is inferred from the saliency values of the regions corresponding to the two tags.
In any of the above aspects, preferably, the method for calculating the correlation of the tags comprises:
1) if the regions corresponding to tag m and tag n are different and $sv_m^j = sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, the influence factors between tag m and tag n are $\alpha_{mn} = 0$ and $\alpha_{nm} = 0$, and it is considered that there is no interaction between tag m and tag n;
2) if the regions corresponding to tag m and tag n are different and $sv_m^j \neq sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, it is considered that tag m and tag n influence each other; when $\alpha_{mn} > 0$, tag n has a promoting effect on tag m and tag m has an inhibiting effect on tag n; when $\alpha_{mn} < 0$, tag n has an inhibiting effect on tag m and tag m has a promoting effect on tag n, where $\alpha_{mn} = sv_m^j - sv_n^j$ and $\alpha_{nm} = -\alpha_{mn}$;
3) if the regions corresponding to tag m and tag n are the same, $\alpha_{mn} = 0$;
4) the influence factor of tag m on itself is $\alpha_{mm} = 0$;
5) if tag m and tag n do not appear at the same time, or neither of them appears, then $\alpha_{mn} = 0$ is specified.
In any of the above schemes, it is preferable that, after all influence factor calculations are completed, the influence factor matrix $M_j$ between the tags of training image $I_j$ is obtained, where $M_j$ is an N × N square matrix.
In any of the above embodiments, preferably, step 07 further comprises calculating the average influence factor matrix $\bar{M}$ of the matrix set M as:
$\bar{M} = \frac{1}{Q} \sum_{j=1}^{Q} M_j$,
where $\beta_{ji}$ denotes an element of the average influence factor matrix $\bar{M}$, and the aggregate influence factor of all other tags on the $j$-th tag is $\beta_j = \sum_{i=1}^{N} \beta_{ji}$.
In any of the above schemes, preferably, the testing phase includes the following sub-steps:
Step 11: calculating a saliency map $s_{img}$ of the test image img;
Step 12: mapping a label sequence corresponding to the test image img to the image area to obtain a segmented area set
Step 13: by averaging the influence factor matrixAnd adjusting the saliency value of the test image to obtain a final saliency map of the test image img.
In any of the above schemes, preferably, the adjustment method is as follows: the saliency value of pixel p in the test image is $sal_p$; the region containing pixel p is the $x_p$-th region, with 1 ≤ $x_p$ ≤ N, and its tag is the $n_p$-th tag, with 1 ≤ $n_p$ ≤ N; the aggregate influence factor of all other tags on the $n_p$-th tag is $\beta_{n_p} = \sum_{i=1}^{N} \beta_{n_p i}$, with 1 ≤ i ≤ N; the modified saliency value of pixel p is obtained by adjusting $sal_p$ with $\beta_{n_p}$; after all elements have been adjusted, the saliency values of the image are normalized.
A second object of the present invention is to provide a salient region extraction system based on a label context, including a training module and a testing module, wherein a training method of the training module includes the following sub-steps:
Step 01: providing a training image set I, wherein the image set I comprises Q images; each image carries tag information, the tag set T comprises N tags, and the training image set I has a corresponding reference saliency map set S;
Step 02: for an image $I_j$ in the training set, reading the corresponding reference saliency map $S_j$ and the corresponding tag sequence $T_j = (t_1^j, t_2^j, \ldots, t_N^j)$, where $t_i^j$ records whether tag $i$ appears in the training picture;
Step 03: mapping the tag sequence corresponding to image $I_j$ to different regions of the image, i.e., segmenting the image to obtain a segmented region sequence $R_j = (r_1^j, r_2^j, \ldots)$;
Step 04: calculating, from the reference saliency map $S_j$ of image $I_j$, the saliency value $sv_i^j$ corresponding to each region $r_i^j$ in the segmented region sequence;
Step 05: a sequence of saliency values from the sequence of segmented regionsCalculating the correlation of the labels;
Step 06: performing the calculation of steps 02 to 05 on each image of the training set to obtain the set of influence factor matrices M = {M_1, M_2, ..., M_j, ..., M_Q};
Step 07: calculating the average influence factor matrix $\bar{M}$ of the matrix set M.
Preferably, $t_i^j = 1$ indicates that the $i$-th tag appears, and $t_i^j = 0$ indicates that the $i$-th tag does not appear.
In any of the above aspects, it is preferable that the saliency value $sv_i^j$ of segmented region $r_i^j$ equals the average of the saliency values of all pixels in region $r_i^j$, yielding the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ corresponding to the region sequence.
In any of the above schemes, it is preferable that the contextual relationship between two tags is inferred from the saliency values of the regions corresponding to the two tags.
In any of the above aspects, preferably, the method for calculating the correlation of the tags comprises:
1) if the regions corresponding to tag m and tag n are different and $sv_m^j = sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, the influence factors between tag m and tag n are $\alpha_{mn} = 0$ and $\alpha_{nm} = 0$, and it is considered that there is no interaction between tag m and tag n;
2) if the regions corresponding to tag m and tag n are different and $sv_m^j \neq sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, it is considered that tag m and tag n influence each other; when $\alpha_{mn} > 0$, tag n has a promoting effect on tag m and tag m has an inhibiting effect on tag n; when $\alpha_{mn} < 0$, tag n has an inhibiting effect on tag m and tag m has a promoting effect on tag n, where $\alpha_{mn} = sv_m^j - sv_n^j$ and $\alpha_{nm} = -\alpha_{mn}$;
3) if the regions corresponding to tag m and tag n are the same, $\alpha_{mn} = 0$;
4) the influence factor of tag m on itself is $\alpha_{mm} = 0$;
5) if tag m and tag n do not appear at the same time, or neither of them appears, then $\alpha_{mn} = 0$ is specified.
In any of the above schemes, it is preferable that, after all influence factor calculations are completed, the influence factor matrix $M_j$ between the tags of training image $I_j$ is obtained, where $M_j$ is an N × N square matrix.
In any of the above embodiments, preferably, step 07 further comprises calculating the average influence factor matrix $\bar{M}$ of the matrix set M as:
$\bar{M} = \frac{1}{Q} \sum_{j=1}^{Q} M_j$,
where $\beta_{ji}$ denotes an element of the average influence factor matrix $\bar{M}$, and the aggregate influence factor of all other tags on the $j$-th tag is $\beta_j = \sum_{i=1}^{N} \beta_{ji}$.
In any of the above schemes, preferably, the testing method of the testing module comprises the following sub-steps:
Step 11: calculating a saliency map $s_{img}$ of the test image img;
Step 12: mapping a label sequence corresponding to the test image img to the image area to obtain a segmented area setStep 13: by averaging the influence factor matrix->And adjusting the saliency value of the test image to obtain a final saliency map of the test image img.
In any of the above schemes, preferably, the adjustment method is as follows: the saliency value of pixel p in the test image is $sal_p$; the region containing pixel p is the $x_p$-th region, with 1 ≤ $x_p$ ≤ N, and its tag is the $n_p$-th tag, with 1 ≤ $n_p$ ≤ N; the aggregate influence factor of all other tags on the $n_p$-th tag is $\beta_{n_p} = \sum_{i=1}^{N} \beta_{n_p i}$, with 1 ≤ i ≤ N; the modified saliency value of pixel p is obtained by adjusting $sal_p$ with $\beta_{n_p}$; after all elements have been adjusted, the saliency values of the image are normalized.
The invention provides a salient region extraction method and system based on label context, which extract the salient objects of an image by combining the contextual relationships among tags with the low-level features of the image, thereby improving the salient region extraction effect.
Drawings
Fig. 1 is a flow chart of a preferred embodiment of a label context based salient region extraction method in accordance with the present invention.
Fig. 2 is a block diagram of a preferred embodiment of a label context based salient region extraction system in accordance with the present invention.
Fig. 3 is a flow chart of a test method of the embodiment shown in fig. 1 of a label context based salient region extraction method in accordance with the present invention.
Fig. 4 is an exemplary illustration, for a preferred embodiment of the label context based salient region extraction method according to the present invention, of images with their corresponding reference saliency map annotations and tag sets.
Fig. 5 is a schematic diagram of a preferred embodiment of the influence factor calculation method of the salient region extraction method based on the label context according to the present invention.
Fig. 6 is a saliency map before and after adjustment of a preferred embodiment of a label context based salient region extraction method in accordance with the present invention.
Detailed Description
The invention is further illustrated by the following figures and specific examples.
Example 1
As shown in fig. 1 and 2, step 100 is performed and training module 200 performs the training step. In the training step, step 101 is performed: a training image set I comprising Q images is provided. Each image carries tag information, the tag set T comprises N tags, and the reference saliency map set S corresponding to the training image set I is provided.
Step 102 is performed: for an image $I_j$ in the training set, the corresponding reference saliency map $S_j$ is read, and the corresponding tag sequence is $T_j = (t_1^j, t_2^j, \ldots, t_N^j)$, where $t_i^j$ records whether tag $i$ appears in the training picture: $t_i^j = 1$ indicates that the $i$-th tag appears, and $t_i^j = 0$ indicates that the $i$-th tag does not appear.
Step 103 is executed: the tag sequence corresponding to image $I_j$ is mapped to different regions of the image, i.e., the image is segmented, yielding a segmented region sequence $R_j = (r_1^j, r_2^j, \ldots)$.
Step 104 is executed: from the reference saliency map $S_j$ of image $I_j$, the saliency value $sv_i^j$ corresponding to each region $r_i^j$ in the segmented region sequence is calculated. The saliency value $sv_i^j$ of segmented region $r_i^j$ equals the average of the saliency values of all pixels in region $r_i^j$, which yields the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ corresponding to the region sequence.
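A minimal sketch of this per-region averaging, assuming the reference saliency map is a 2-D array and each region is given as a boolean mask (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def region_saliency_values(ref_saliency, region_masks):
    """Saliency value sequence SV_j: mean reference saliency of each region.

    ref_saliency : 2-D float array, the reference saliency map S_j.
    region_masks : list of 2-D boolean arrays, one mask per region r_i^j.
    """
    return np.array([ref_saliency[mask].mean() for mask in region_masks])
```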
Step 105 is executed: the correlation of the tags is calculated from the saliency value sequence $SV_j$ of the segmented region sequence $R_j$. The contextual relationship between two tags is inferred from the saliency values of the regions corresponding to the two tags. The method for calculating the correlation of the tags is as follows:
1) If the regions corresponding to tag m and tag n are different and $sv_m^j = sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, the influence factors between tag m and tag n are $\alpha_{mn} = 0$ and $\alpha_{nm} = 0$, and it is considered that there is no interaction between tag m and tag n;
2) If the regions corresponding to tag m and tag n are different and $sv_m^j \neq sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, it is considered that tag m and tag n influence each other; when $\alpha_{mn} > 0$, tag n has a promoting effect on tag m and tag m has an inhibiting effect on tag n; when $\alpha_{mn} < 0$, tag n has an inhibiting effect on tag m and tag m has a promoting effect on tag n, where $\alpha_{mn} = sv_m^j - sv_n^j$ and $\alpha_{nm} = -\alpha_{mn}$;
3) If the regions corresponding to tag m and tag n are the same, $\alpha_{mn} = 0$;
4) The influence factor of tag m on itself is $\alpha_{mm} = 0$;
5) If tag m and tag n do not appear at the same time, or neither of them appears, then $\alpha_{mn} = 0$ is specified.
When all influence factor calculations are completed, the influence factor matrix $M_j$ between the tags of training image $I_j$ is obtained, where $M_j$ is an N × N square matrix with entries $\alpha_{mn}$.
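The five rules above can be condensed into one routine. The sketch below is an illustrative reading of the patent (0-based tag indices, array layout and names are assumptions), with $\alpha_{mn} = sv_m^j - sv_n^j$ taken from the worked example in the fourth embodiment:

```python
import numpy as np

def influence_matrix(tag_present, region_of_tag, sv):
    """Influence factor matrix M_j for one training image.

    tag_present  : length-N 0/1 vector, the tag sequence t^j.
    region_of_tag: length-N int array; region index of each present tag, -1 if absent.
    sv           : per-region saliency values from region_saliency_values().
    """
    n = len(tag_present)
    M = np.zeros((n, n))
    for m in range(n):
        for k in range(n):
            if m == k:
                continue                          # rule 4): alpha_mm = 0
            if not (tag_present[m] and tag_present[k]):
                continue                          # rule 5): tags must co-occur
            if region_of_tag[m] == region_of_tag[k]:
                continue                          # rule 3): same region
            # rules 1) and 2): saliency difference, 0 when saliencies are equal
            M[m, k] = sv[region_of_tag[m]] - sv[region_of_tag[k]]
    return M
```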
Step 106 is performed to determine whether every image of the training set has been processed. If not, step 102 is performed again. Once every image of the training set has been processed, step 107 is performed to obtain the set of influence factor matrices M = {M_1, M_2, ..., M_j, ..., M_Q}.
Step 108 is executed to calculate the average influence factor matrix $\bar{M}$ of the matrix set M:
$\bar{M} = \frac{1}{Q} \sum_{j=1}^{Q} M_j$,
where $\beta_{ji}$ denotes an element of the average influence factor matrix $\bar{M}$, and the aggregate influence factor of all other tags on the $j$-th tag is $\beta_j = \sum_{i=1}^{N} \beta_{ji}$.
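A short sketch of this averaging step; reading the aggregate influence $\beta_j$ as a row sum of $\bar{M}$ is our assumption, since the original formula image is not recoverable:

```python
import numpy as np

def average_influence_matrix(matrices):
    """Average influence factor matrix over the Q per-image matrices M_j."""
    return np.mean(np.stack(matrices), axis=0)

def received_influence(M_bar):
    """Aggregate influence beta_j on each tag from all other tags (row sums)."""
    return M_bar.sum(axis=1)
```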
Step 110 is performed, and the test module 210 performs the testing step. In the testing step, as shown in fig. 3, step 111 is performed to calculate a saliency map $s_{img}$ of the test image img. Step 112 is performed to map the tag sequence corresponding to the test image img to the image regions, obtaining a set of segmented regions. Step 113 is performed to adjust the saliency values of the test image by means of the average influence factor matrix $\bar{M}$, obtaining the final saliency map of the test image img. The adjustment method is as follows: the saliency value of pixel p in the test image is $sal_p$; the region containing pixel p is the $x_p$-th region, with 1 ≤ $x_p$ ≤ N, and its tag is the $n_p$-th tag, with 1 ≤ $n_p$ ≤ N; the aggregate influence factor of all other tags on the $n_p$-th tag is $\beta_{n_p} = \sum_{i=1}^{N} \beta_{n_p i}$, with 1 ≤ i ≤ N; the modified saliency value of pixel p is obtained by adjusting $sal_p$ with $\beta_{n_p}$; after all elements have been adjusted, the saliency values of the image are normalized.
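A sketch of this test-time adjustment. Because the exact combination of $sal_p$ and $\beta_{n_p}$ is not recoverable from the text, a multiplicative rule $sal_p (1 + \beta_{n_p})$ is assumed here purely for illustration, followed by min-max normalization:

```python
import numpy as np

def adjust_saliency(sal, pixel_region, region_tag, M_bar):
    """Adjust the test image's saliency map with the average influence factors.

    sal          : 2-D array, the initial saliency map s_img.
    pixel_region : 2-D int array; pixel_region[p] = x_p, the region of pixel p.
    region_tag   : 1-D int array mapping each region x_p to its tag index n_p.
    M_bar        : average influence factor matrix.
    """
    beta = M_bar.sum(axis=1)               # influence received by each tag
    n_p = region_tag[pixel_region]         # tag index n_p for every pixel
    adjusted = sal * (1.0 + beta[n_p])     # assumed adjustment rule
    adjusted -= adjusted.min()             # min-max normalization to [0, 1]
    return adjusted / (adjusted.max() + 1e-12)
```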
In the present embodiment, only one testing procedure is exemplified; other testing schemes having the same function may be used instead, and the invention is not limited to the scheme of this embodiment.
Example two
In view of the currently insufficient study of tag information in image salient region extraction, the main work of the invention is to introduce the semantic information of tags into salient region extraction as an important external cue of the image, to take the contextual relationships among the tags into account, and to combine them with the low-level features of the image to extract the salient objects, thereby improving the salient region extraction effect.
The training phase of the method comprises the following steps:
1. A training image set I is provided, the training image set I containing Q images; each image carries tag information, and the tag set T comprises N tags; the corresponding reference saliency map set S is provided as well.
2. For an image $I_j$ in the training set, the corresponding reference saliency map $S_j$ is read, and the corresponding tag sequence is $T_j = (t_1^j, t_2^j, \ldots, t_N^j)$; $t_i^j = 1$ indicates that the $i$-th tag appears, and $t_i^j = 0$ indicates that the $i$-th tag does not appear.
3. The tag sequence corresponding to image $I_j$ is mapped to different regions of the image, i.e., the image is segmented, yielding a segmented region sequence $R_j = (r_1^j, r_2^j, \ldots)$.
4. From the reference saliency map $S_j$ of image $I_j$, the saliency value $sv_i^j$ corresponding to each region $r_i^j$ of the segmented region sequence is calculated as the average of the saliency values of all pixels in $r_i^j$, which yields the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ corresponding to the region sequence.
5. The correlation of the tags is calculated from the saliency value sequence $SV_j$ of the segmented region sequence, i.e., the contextual relationship between two tags is inferred from the saliency values of their corresponding regions. The calculation method is: (1) if the regions corresponding to tag m and tag n are different and $sv_m^j = sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, the influence factors between tag m and tag n are $\alpha_{mn} = 0$ and $\alpha_{nm} = 0$, and it is considered that there is no interaction between tag m and tag n; if the regions are different and $sv_m^j \neq sv_n^j$, tag m and tag n are considered to influence each other, with $\alpha_{mn} = sv_m^j - sv_n^j$; when $\alpha_{mn} > 0$, tag n has a promoting effect on tag m and tag m has an inhibiting effect on tag n, and when $\alpha_{mn} < 0$ the opposite holds; (2) if the regions corresponding to tag m and tag n are the same, $\alpha_{mn} = 0$; (3) the influence factor of tag m on itself is specified as $\alpha_{mm} = 0$; (4) if tag m and tag n do not appear at the same time, or neither of them appears, $\alpha_{mn} = 0$ is specified. After all influence factors are calculated, the influence factor matrix $M_j$ between the tags is obtained; $M_j$ is an N × N square matrix.
6. By performing the computation of steps 2 to 5 on each image of the training set, the set of influence factor matrices M = {M_1, M_2, ..., M_j, ..., M_Q} is obtained, and the average influence factor matrix of the matrix set M is computed as $\bar{M} = \frac{1}{Q} \sum_{j=1}^{Q} M_j$, where the aggregate influence factor of all other tags on the $j$-th tag is $\beta_j = \sum_{i=1}^{N} \beta_{ji}$, with $\beta_{ji}$ an element of $\bar{M}$.
Testing:
1. The test image is img; the saliency map $s_{img}$ of the test image img is calculated.
2. The tag sequence corresponding to the test image img is mapped to the image regions, yielding a set of segmented regions.
3. The saliency values of the test image are adjusted by means of the average influence factor matrix $\bar{M}$. The adjustment method is as follows: the saliency value of pixel p in the test image is $sal_p$; the region containing pixel p is the $x_p$-th region, with 1 ≤ $x_p$ ≤ N, and its tag is the $n_p$-th tag, with 1 ≤ $n_p$ ≤ N; the aggregate influence factor of all other tags on the $n_p$-th tag is $\beta_{n_p} = \sum_{i=1}^{N} \beta_{n_p i}$, with 1 ≤ i ≤ N; the modified saliency value of pixel p is obtained by adjusting $sal_p$ with $\beta_{n_p}$; after all elements have been adjusted, the saliency values of the image are normalized.
4. A final saliency map of the test image img is obtained.
Example III
As shown in fig. 4, a set of images, corresponding reference saliency map labels, and corresponding labels are illustrated.
The leftmost column shows the original images, i.e., the test images; the middle column shows the reference saliency map annotations corresponding to the original images; and the right column shows the sets of tags corresponding to the images.
Example IV
The embodiment describes a method for calculating an influence factor matrix of a tag.
As shown in fig. 5, there are 4 tags in total: animal, cat, person and grass; the serial numbers of these 4 tags are specified as 1, 2, 3 and 4 in order.
In the first image, two tags appear, animal and cat, and the regions corresponding to animal and cat are identical, so $\alpha_{12} = 0$. The tags appearing in the second image are person and grass; the saliency value of the region corresponding to person is 0.8 and the saliency value of the region corresponding to grass is 0.1, so $\alpha_{34} = 0.8 - 0.1 = 0.7$ and $\alpha_{43} = -0.7$. According to the calculation method specified by the invention, likewise $\alpha_{11} = 0$; $\alpha_{13} = 0$; $\alpha_{14} = 0$; $\alpha_{21} = 0$; $\alpha_{22} = 0$; $\alpha_{23} = 0$; $\alpha_{24} = 0$; $\alpha_{31} = 0$; $\alpha_{32} = 0$; $\alpha_{33} = 0$; $\alpha_{41} = 0$; $\alpha_{42} = 0$; $\alpha_{44} = 0$.
The influence factor matrix is therefore

M = [ 0    0    0     0
      0    0    0     0
      0    0    0     0.7
      0    0   -0.7   0 ]
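This worked example can be checked numerically; the snippet below simply fills in the entries listed above (0-based indices stand in for the tag serial numbers 1 to 4):

```python
import numpy as np

# Tag order: animal, cat, person, grass (serial numbers 1-4 in the text).
alpha = np.zeros((4, 4))
alpha[2, 3] = 0.8 - 0.1    # alpha_34: person vs. grass saliency difference
alpha[3, 2] = 0.1 - 0.8    # alpha_43: the sign-reversed counterpart
print(alpha)               # every other entry is 0, as computed above
```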
Example five
This embodiment illustrates a saliency map of an image and an adjusted saliency map.
The first column is the label carried by the image, the second column is the original image, the third column is the saliency map without considering the label context, the fourth column is the saliency map with considering the label context, and the salient objects in the saliency map with considering the label context can be seen to be more salient.
The foregoing description of the invention has been presented for purposes of illustration and description, but is not intended to be limiting. Any simple modification of the above embodiments according to the technical substance of the present invention still falls within the scope of the technical solution of the present invention. In this specification, each embodiment is mainly described in the specification as a difference from other embodiments, and the same or similar parts between the embodiments need to be referred to each other. For system embodiments, the description is relatively simple as it essentially corresponds to method embodiments, and reference should be made to the description of method embodiments for relevant points.

Claims (7)

1. A method for extracting salient regions based on label context, comprising a training step and a testing step, characterized in that the training step comprises the following substeps:
step 01: providing a training image set I, wherein the image set I comprises Q images; each image carries tag information, the tag set T comprises N tags, and the training image set I has a corresponding reference saliency map set S;
step 02: for an image $I_j$ in the training set, reading the corresponding reference saliency map $S_j$ and the corresponding tag sequence $T_j = (t_1^j, t_2^j, \ldots, t_N^j)$, where $t_i^j$ records whether tag $i$ appears in the training picture;
step 03: mapping the tag sequence corresponding to image $I_j$ to different regions of the image, i.e., segmenting the image to obtain a segmented region sequence $R_j = (r_1^j, r_2^j, \ldots)$;
step 04: calculating, from the reference saliency map $S_j$ of image $I_j$, the saliency value $sv_i^j$ corresponding to each region $r_i^j$ in the segmented region sequence;
step 05: calculating the correlation of the tags from the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ of the segmented region sequence $R_j$; the method for calculating the correlation of the tags comprises:
1) if the regions corresponding to tag m and tag n are different and $sv_m^j = sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, the influence factors between tag m and tag n are $\alpha_{mn} = 0$ and $\alpha_{nm} = 0$, and it is considered that there is no interaction between tag m and tag n;
2) if the regions corresponding to tag m and tag n are different and $sv_m^j \neq sv_n^j$, with 1 ≤ m ≤ N and 1 ≤ n ≤ N, it is considered that tag m and tag n influence each other; when $\alpha_{mn} > 0$, tag n has a promoting effect on tag m and tag m has an inhibiting effect on tag n; when $\alpha_{mn} < 0$, tag n has an inhibiting effect on tag m and tag m has a promoting effect on tag n, wherein $\alpha_{mn} = sv_m^j - sv_n^j$ and $\alpha_{nm} = -\alpha_{mn}$;
3) if the regions corresponding to tag m and tag n are the same, $\alpha_{mn} = 0$;
4) the influence factor of tag m on itself is $\alpha_{mm} = 0$;
5) if tag m and tag n do not appear at the same time, or neither of them appears, then $\alpha_{mn} = 0$ is specified;
step 06: performing the calculation of steps 02 to 05 on each image of the training set to obtain the set of influence factor matrices M = {M_1, M_2, ..., M_j, ..., M_Q}; when all influence factor calculations are completed, the influence factor matrix $M_j$ between the tags of training image $I_j$ is obtained;
wherein $M_j$ is an N × N square matrix;
step 07: calculating the average influence factor matrix $\bar{M}$ of the matrix set M.
2. The salient region extraction method based on label context of claim 1, wherein $t_i^j = 1$ indicates that the $i$-th tag appears, and $t_i^j = 0$ indicates that the $i$-th tag does not appear.
3. The salient region extraction method based on label context according to claim 2, wherein the saliency value $sv_i^j$ of segmented region $r_i^j$ equals the average of the saliency values of all pixels in region $r_i^j$, yielding the saliency value sequence $SV_j = (sv_1^j, sv_2^j, \ldots)$ corresponding to the region sequence.
4. The salient region extraction method based on label context as claimed in claim 3, wherein in said step 06 the contextual relationship between two tags is inferred from the saliency values of the regions corresponding to the two tags.
5. The label context based salient region extraction method of claim 4, wherein the average influence factor matrix $\bar{M}$ is calculated as:
$\bar{M} = \frac{1}{Q} \sum_{j=1}^{Q} M_j$,
wherein $\beta_{ij}$ denotes an element of the average influence factor matrix $\bar{M}$.
6. The label context based salient region extraction method of claim 5, wherein the testing step comprises the following sub-steps:
step 11: calculating a saliency map $s_{img}$ of the test image img;
Step 12: mapping a label sequence corresponding to the test image img to the image area to obtain a segmented area set
Step 13: by averaging the influence factor matrixThe saliency value of the test image is adjusted to obtain a final saliency map of the test image img, and the adjustment method comprises the following steps: the saliency value of pixel p in the test image is +.>The region label corresponding to the pixel p is the x-th p The number of the labels is not less than 1 and not more than x p N is less than or equal to the nth p The influence factor of the individual tag by all other tags is +.> Then image is likeThe significance value after correction of the prime p is +.>And normalizing the salient values of the images after all the elements are adjusted.
7. A salient region extraction system based on label context, comprising a training module and a testing module, characterized in that the training method of the training module comprises the following sub-steps:
step 01: providing a training image set I, wherein the image set I comprises Q images; each image carries tag information, the tag set T comprises N tags, and the training image set I has a corresponding reference saliency map set S;
step 02: for an image $I_j$ in the training set, reading the corresponding reference saliency map $S_j$ and the corresponding tag sequence $T_j = (t_1^j, t_2^j, \ldots, t_N^j)$, where $t_i^j$ records whether tag $i$ appears in the training picture;
step 03: mapping the tag sequence corresponding to image $I_j$ to different regions of the image, i.e., segmenting the image to obtain a segmented region sequence $R_j = (r_1^j, r_2^j, \ldots)$;
step 04: calculating, from the reference saliency map $S_j$ of image $I_j$, the saliency value $sv_i^j$ corresponding to each region $r_i^j$ in the segmented region sequence;
Step 05: a sequence of saliency values from the sequence of segmented regions Calculating the correlation of the labels; the method for calculating the correlation of the labels comprises the following steps:
1) If the areas corresponding to the label m and the label n are different, and influence factor alpha between tag m and tag n mn =0,α nm =0, it is considered that there is no interaction between tag m and tag n;
2) If the areas corresponding to the label m and the label n are different, and then it is considered that label m and label n interact with each other; when->When the label n has promotion effect on the label m, the label m has inhibition effect on the label n; when->When it is indicated that the label n has an inhibitory effect on the label m, the label m has a promoting effect on the label n, wherein +.>
3) If the areas corresponding to the label m and the label n are the same, alpha mn =0;
4) The influence factor of the label m on the label m is alpha mm =0;
5) If tag m and tag n are not present at the same time or are not present at the same time, then α is specified mn =0;
Step 06: performing calculation in steps 02 to 05 on each image of the training set to obtain a set M= { M of the influence factor matrix 1 ,M 2 ,…,M i ,…,M N -a }; when all the influence factor calculation is completed, an influence factor matrix M between labels of the training image j is obtained j
wherein $M_j$ is an N × N square matrix;
step 07: calculating the average influence factor matrix $\bar{M}$ of the matrix set M.
CN202010441556.XA 2020-05-22 2020-05-22 Label context-based salient region extraction method and system Active CN111666952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010441556.XA CN111666952B (en) 2020-05-22 2020-05-22 Label context-based salient region extraction method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010441556.XA CN111666952B (en) 2020-05-22 2020-05-22 Label context-based salient region extraction method and system

Publications (2)

Publication Number Publication Date
CN111666952A CN111666952A (en) 2020-09-15
CN111666952B (en) 2023-10-24

Family

ID=72384374

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010441556.XA Active CN111666952B (en) 2020-05-22 2020-05-22 Label context-based salient region extraction method and system

Country Status (1)

Country Link
CN (1) CN111666952B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105989174B (en) * 2015-03-05 2019-11-01 欧姆龙株式会社 Region-of-interest extraction element and region-of-interest extracting method
JP6756406B2 * 2016-11-30 2020-09-16 NEC Corporation Image processing device, image processing method and image processing program

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN106228544A (en) * 2016-07-14 2016-12-14 郑州航空工业管理学院 A kind of significance detection method propagated based on rarefaction representation and label
CN107967480A (en) * 2016-10-19 2018-04-27 北京联合大学 A kind of notable object extraction method based on label semanteme
CN107977948A (en) * 2017-07-25 2018-05-01 北京联合大学 A kind of notable figure fusion method towards sociogram's picture
CN110853053A (en) * 2019-10-25 2020-02-28 天津大学 Salient object detection method taking multiple candidate objects as semantic knowledge

Non-Patent Citations (3)

Title
A novel deep network and aggregation model for saliency detection; Ye Liang et al.; The Visual Computer; 2019-12-09; full text *
Recurrent learning of context for salient region detection; Chunling Wu; Personal and Ubiquitous Computing; 2018-06-19; full text *
Salient region extraction methods based on visual saliency and their applications; 梁晔; China Doctoral Dissertations Full-text Database, Information Science and Technology (Monthly); 2018-06-15; full text *

Also Published As

Publication number Publication date
CN111666952A (en) 2020-09-15


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
Effective date of registration: 20230823
Address after: 1-1201-1, 12th Floor, No. 87 West Third Ring North Road, Haidian District, Beijing, 100048
Applicant after: Beijing Tengxin Soft Innovation Technology Co., Ltd.
Address before: 100101, No. 97 East Fourth Ring Road, Chaoyang District, Beijing
Applicant before: Beijing Union University
GR01 Patent grant
GR01 Patent grant