CN113658129B - Position extraction method combining visual saliency and line segment strength - Google Patents

Position extraction method combining visual saliency and line segment strength

Info

Publication number
CN113658129B
Authority
CN
China
Prior art keywords
line segment
pixel
superpixel
img
remote sensing
Prior art date
Legal status
Active
Application number
CN202110934692.7A
Other languages
Chinese (zh)
Other versions
CN113658129A (en)
Inventor
常晓宇
王敏
王港
高峰
Current Assignee
CETC 54 Research Institute
Original Assignee
CETC 54 Research Institute
Priority date
Filing date
Publication date
Application filed by CETC 54 Research Institute
Priority to CN202110934692.7A
Publication of CN113658129A
Application granted
Publication of CN113658129B
Legal status: Active
Anticipated expiration

Classifications

    • G06T 7/0002 — Image analysis; inspection of images, e.g. flaw detection
    • G06N 20/00 — Computing arrangements based on specific computational models; machine learning
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 5/30 — Erosion or dilatation, e.g. thinning
    • G06T 7/136 — Segmentation; edge detection involving thresholding
    • G06T 2207/10032 — Image acquisition modality: satellite or aerial image; remote sensing
    • G06T 2207/20036 — Morphological image processing
    • G06T 2207/20081 — Training; learning
    • G06T 2207/30181 — Earth observation


Abstract

The invention discloses a position extraction method combining visual saliency and line segment strength, belonging to the technical field of remote sensing image processing. The method comprises the following steps: segment the remote sensing image into n superpixels to obtain a superpixel map Img_SP; process the remote sensing image with the LSD algorithm to obtain a line segment feature map Img_L; calculate the superpixel line segment density where line segments in Img_L intersect superpixels in Img_SP; calculate the superpixel line segment parallelism; calculate the superpixel line segment strength; calculate the superpixel saliency; obtain a position intensity map I_MP by weighted fusion; perform threshold segmentation on I_MP to obtain an initial position result map Img_MP; apply morphological processing to the image; and obtain the target frames to produce the final position target recognition result. The method can extract position targets quickly from saliency and line segment features without manual participation, and can detect position targets effectively without training samples.

Description

Position extraction method combining visual saliency and line segment strength
Technical Field
The invention belongs to the technical field of remote sensing image processing, and particularly relates to a position extraction method combining visual saliency and line segment strength.
Background
With the rapid development of remote sensing technology, the spatial and temporal resolution of remote sensing images keeps increasing, which greatly improves the effectiveness and practicality of remote sensing and enables wide application in military and civil fields. As an important military target, the position has significant research value in military interpretation and intelligence acquisition. How to better use existing computer vision algorithms to recognize positions in remote sensing images, so as to assist manual interpretation, reduce manual workload, and improve recognition efficiency, has become a focus of current research.
Remote sensing image target recognition algorithms fall roughly into two categories. The first builds feature models from the characteristics of the target and realizes recognition and extraction through feature fusion and threshold decisions; the second constructs a supervised learning model from training samples and realizes detection through model inference. The first category mainly encodes the geometric and texture information of the target with hand-crafted features and extracts the target from edge information, linear features, texture features, saliency, and so on. Some researchers exploit the spatial relationship between airports and positions, interpreting satellite images by combining airport locations with human experience and adding Hough line and edge features to recognize surface-to-air positions; however, this approach still relies heavily on manual interpretation and offers a low degree of automation. The second category introduces a supervised learning mechanism and machine learning into the recognition system and builds a classifier over features to recognize the target. A typical machine learning model requires constructing a feature set for targets such as airports and then classifying with a classifier. In recent years, deep learning has gradually been applied to image target recognition to extract high-level target features and realize target recognition and extraction. Some scholars recognize surface-to-air positions with ResNet-101, but with only 100 position samples the robustness and generalization of the resulting model are questionable. Supervised detection algorithms usually achieve good accuracy, but they need a large number of accurately labeled samples to build the model, so the results and models are often highly task-specific and poorly portable, and sample labeling and model training consume considerable time and computing resources.
Disclosure of Invention
The invention aims to provide a position extraction method combining visual saliency and line segment strength, which realizes unsupervised detection and recognition of position targets without training samples.
The technical scheme adopted by the invention is as follows:
a method for extracting a position by combining visual saliency and line segment strength comprises the following steps:
step 1, using SEEDSThe remote sensing image is divided into n superpixels by an algorithm to obtain a superpixel graph Img SP
Step 2, processing the remote sensing image by using an LSD algorithm to obtain a line segment characteristic graph Img L
Step 3, calculating Img L Segment in (1) and Img SP Super pixel P in the figure k Line segment density LD, img of superpixel at intersection L Middle and super pixel P k The total number of crossed line segments is t;
Step 4, calculate the superpixel line segment parallelism; when t line segments intersect superpixel P_k, the line segment parallelism LP is:
LP = Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
where LPC denotes the parallel relation function and Δθ_nm is the inclination angle difference between line segments l_n and l_m;
Step 5, calculate the superpixel line segment strength SPLI, which is the superposition of the line segment parallelism LP and the line segment density LD;
Step 6, obtain the pixel-level visual saliency map VS_pixel of the original remote sensing image and calculate the superpixel saliency VS_OB;
Step 7, obtaining a position intensity chart I based on a mode of adopting weighted fusion MP
Figure BDA0003212434710000022
Wherein the content of the first and second substances,
Figure BDA0003212434710000023
f () represents a normalization process on the image for weighting the adjustment factor;
Step 8, perform threshold segmentation on the position intensity map I_MP to obtain an initial position result map Img_MP, in which superpixels belonging to the position target are assigned 1 and the background is assigned 0;
Step 9, perform morphological processing on the image, comprising four operations: hole filling, maximum-area filtering, morphological closing, and morphological opening, to obtain the morphologically processed image Img_MP^M;
Step 10, calculating an image
Figure BDA0003212434710000025
And the closest rectangle of the medium target connected domain is used as a final identified target frame, and the target frame is superposed on the original remote sensing image to obtain a final position target identification result.
Further, the superpixel line segment density LD in step 3 is calculated as follows:
LD(P_k) = Σ_{i=1}^{t} Q(l_i) · N(P_k ∩ l_i) / N(l_i)
where l_i denotes a line segment extracted in Img_L, N(P_k ∩ l_i) denotes the number of pixels where line segment l_i intersects superpixel P_k, N(l_i) denotes the total number of pixels in l_i, and Q(l_i) denotes the weight of the line segment, calculated as:
Q(l_i) = L(l_i)/L_max
where L(l_i) is the length of line segment l_i and L_max is the maximum length over all line segments.
Further, the parallel relation function LPC in step 4 is specifically:
LPC(Δθ_nm) = 1 if Δθ_nm ≤ α, and 0 otherwise
where α is the tolerance of the inclination angle difference.
Further, the superpixel line segment strength SPLI in step 5 is calculated as follows:
SPLI(P_k) = LD(P_k) + Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
further, in step 6, the superpixel saliency VS is calculated OB The method comprises the following steps:
Figure BDA0003212434710000032
in the formula, VS OB Representing the significance intensity of the superpixel GBVS, and I (x, t) represents VS pixel Pixel value at (x, y) position in the figure, (x) k ,y k ) Then it represents a super pixel P k Coordinates of the picture elements, n representing a super-pixel P k The total number of picture elements involved.
Further, the threshold segmentation in step 8 is as follows:
Img_MP(P_k) = 1 if I_MP(P_k) ≥ T, and 0 otherwise
where T is the threshold for deciding whether a superpixel belongs to a position, I_MP(P_k) denotes the position intensity of superpixel P_k, and Img_MP denotes the initial position result map, in which a pixel gray value of 1 denotes the position and 0 denotes the background.
The invention has the following beneficial effects:
(1) The invention provides a position extraction method combining visual saliency and line segment strength, which can effectively detect position targets without training samples.
(2) With this method, position targets can be extracted quickly from saliency and line segment features without manual participation.
Drawings
FIG. 1 is a schematic diagram of the principle of the position extraction method.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and embodiments.
A position extraction method combining visual saliency and line segment strength comprises the following steps:
Step 1, segment the remote sensing image into n superpixels with the SEEDS algorithm to obtain a superpixel map Img_SP;
Step 2, process the remote sensing image with the LSD algorithm to obtain a line segment feature map Img_L;
Step 3, calculating t line segments (Img) L Line segment in) and P k (from Img) SP Superpixel in the figure) line segment density LD at the intersection;
Step 4, calculate the superpixel line segment parallelism; when t line segments intersect superpixel P_k, the line segment parallelism LP is:
LP = Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
where LPC denotes the parallel relation function;
Step 5, calculate the superpixel line segment strength SPLI, which is the superposition of the line segment parallelism LP and the line segment density LD;
Step 6, calculate the superpixel saliency, where the pixel-level visual saliency map obtained from the original image is VS_pixel and the superpixel GBVS saliency is VS_OB;
Step 7, obtaining a position intensity map I based on a mode of adopting weighted fusion MP
Figure BDA0003212434710000041
Wherein, the first and the second end of the pipe are connected with each other,
Figure BDA0003212434710000042
f () represents a normalization process on the image for the weighting adjustment factor;
Step 8, perform threshold segmentation on the position intensity map I_MP to obtain an initial position result map Img_MP, in which superpixels belonging to the position target are assigned 1 and the background is assigned 0;
Step 9, perform morphological processing on the image, mainly comprising four operations: hole filling, maximum-area filtering, morphological closing, and morphological opening, to obtain the morphologically processed image Img_MP^M;
Step 10 acquires a target frame. Post-computation processing graph
Figure BDA0003212434710000044
The closest rectangle of the middle target connected domain is used as a finally identified target frame, and the target frame is superposed on the original image to obtain a final position target identification result;
The superpixel line segment density LD in step 3 is computed as follows:
LD(P_k) = Σ_{i=1}^{t} Q(l_i) · N(P_k ∩ l_i) / N(l_i)
where l_i denotes a line segment extracted in Img_L, N(P_k ∩ l_i) is the number of pixels where line segment l_i intersects superpixel P_k, and N(l_i) is the total number of pixels in l_i; Q(l_i) is the weight of the line segment, computed as Q(l_i) = L(l_i)/L_max, where L_max is the maximum length over all line segments.
The parallel relation function LPC in step 4 is computed as follows:
LPC(Δθ_nm) = 1 if Δθ_nm ≤ α, and 0 otherwise
where Δθ_nm is the inclination angle difference between line segments l_n and l_m, LPC(Δθ_nm) is the value of the parallel relation function when the inclination angle difference is Δθ_nm, and α is the tolerance of the inclination angle difference.
The superpixel line segment strength SPLI in step 5 is computed as follows:
SPLI(P_k) = LD(P_k) + Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
where SPLI denotes the superpixel line segment strength, t denotes the total number of line segments intersecting superpixel P_k, and LPC(Δθ_nm) denotes the parallel relation function when line segments l_n and l_m form the angle Δθ_nm.
The superpixel visual saliency VS_OB in step 6 is computed from VS_pixel as:
VS_OB(P_k) = (1/n) Σ_{(x_k, y_k) ∈ P_k} I(x_k, y_k)
where VS_OB denotes the superpixel GBVS saliency strength, I(x, y) denotes the pixel value at position (x, y) in the VS_pixel map, (x_k, y_k) denote the coordinates of the pixels in superpixel P_k, and n denotes the total number of pixels in P_k.
The threshold segmentation operation in step 8 is expressed as follows:
Img_MP(P_k) = 1 if I_MP(P_k) ≥ T, and 0 otherwise
where T is the threshold for deciding whether a superpixel belongs to a position, and Img_MP is the initial position result map, in which a pixel gray value of 1 denotes the position and 0 denotes the background.
The principle of the method is as follows:
Current mainstream position detection algorithms fall into two categories. The first replaces the geometric and texture information of the target with hand-crafted features and extracts the target from edge information, linear features, texture features, saliency, and so on; the second uses supervised learning, introducing a supervised learning mechanism and machine learning into the recognition system and building a classifier over features to recognize the target. These recognition methods have the following problems: 1) the unsupervised methods rely on airport locations to identify positions, adding Hough line and edge features, but still depend heavily on manual interpretation and offer a low degree of automation; 2) position detection algorithms based on machine learning and deep learning require a large number of accurately labeled samples to build the model, so the results and models are often highly task-specific and poorly portable.
Aiming at the difficulty of selecting position samples on remote sensing images and the heavy manual dependence of position recognition, the method combines visual saliency with target line segment features: it computes a superpixel GBVS saliency map and a line segment strength map, where GBVS saliency is a bottom-up saliency mechanism and line segment strength is a feature map constructed from the density of line segments in the remote sensing image and their spatial parallel relations, and finally fuses the two maps to extract the position intensity map.
The following is a more specific example:
Referring to fig. 1, a position extraction method combining visual saliency and line segment strength includes the following specific steps:
Step 1, segment the remote sensing image with the SEEDS algorithm to obtain superpixels:
Img_SP = SEEDS_n(Img0)
where Img0 denotes the original remote sensing image, Img_SP denotes the superpixel-segmented image carrying the superpixel boundary information, n denotes the number of superpixels in the segmentation, and SEEDS denotes the superpixel segmentation operation.
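For illustration only (not part of the claimed method), step 1 could be realized with the SEEDS implementation shipped in opencv-contrib-python; the superpixel count and iteration count below are assumed tuning parameters, not values taken from the patent:

    # Sketch of step 1: SEEDS superpixels (requires opencv-contrib-python).
    import cv2
    import numpy as np

    def seeds_superpixels(img0: np.ndarray, n_superpixels: int = 600,
                          n_iterations: int = 10) -> np.ndarray:
        """Return a label map Img_SP assigning each pixel to one of n superpixels."""
        h, w = img0.shape[:2]
        channels = img0.shape[2] if img0.ndim == 3 else 1
        # Positional args: width, height, channels, num_superpixels, num_levels.
        seeds = cv2.ximgproc.createSuperpixelSEEDS(w, h, channels, n_superpixels, 4)
        seeds.iterate(img0, n_iterations)
        return seeds.getLabels()  # int32 superpixel id per pixel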
Step 2, compute the LSD line segment features of the original image:
Img_L = LSD(Img0)
where Img_L denotes the line segment feature map extracted from the original remote sensing image and LSD denotes the line segment feature extraction algorithm.
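A minimal sketch of step 2, assuming cv2.createLineSegmentDetector is present in the installed OpenCV build (some 4.x builds shipped without the LSD implementation, in which case pylsd is a common substitute):

    # Sketch of step 2: LSD line segments as (x1, y1, x2, y2) rows.
    import cv2
    import numpy as np

    def lsd_segments(img0: np.ndarray) -> np.ndarray:
        gray = cv2.cvtColor(img0, cv2.COLOR_BGR2GRAY) if img0.ndim == 3 else img0
        lsd = cv2.createLineSegmentDetector()
        lines = lsd.detect(gray)[0]  # (N, 1, 4) float32, or None if no segments
        return np.zeros((0, 4)) if lines is None else lines.reshape(-1, 4)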
Step 3, calculate the superpixel line segment density LD. For superpixel P_k, when t line segments (from Img_L) intersect P_k (from Img_SP), the line segment density is defined as:
LD(P_k) = Σ_{i=1}^{t} Q(l_i) · N(P_k ∩ l_i) / N(l_i)
where l_i denotes a line segment extracted in Img_L, N(P_k ∩ l_i) is the number of pixels where line segment l_i intersects superpixel P_k, and N(l_i) is the total number of pixels in l_i; Q(l_i) is the weight of the line segment, computed as Q(l_i) = L(l_i)/L_max, where L_max is the maximum length over all line segments.
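Under the reconstruction above, the density computation reduces to rasterizing each segment and accumulating, per superpixel, the length-weighted fraction of segment pixels falling inside it. A sketch (the rasterization helper and the array layout are assumptions):

    # Sketch of step 3: superpixel line segment density LD(P_k).
    import numpy as np
    from skimage.draw import line as raster_line

    def segment_pixels(seg, shape):
        """Rasterize one segment (x1, y1, x2, y2) to in-bounds (row, col) pixels."""
        x1, y1, x2, y2 = np.round(seg).astype(int)
        rr, cc = raster_line(y1, x1, y2, x2)
        keep = (rr >= 0) & (rr < shape[0]) & (cc >= 0) & (cc < shape[1])
        return rr[keep], cc[keep]

    def line_density(labels, segments):
        ld = np.zeros(labels.max() + 1)
        lengths = np.hypot(segments[:, 2] - segments[:, 0],
                           segments[:, 3] - segments[:, 1])
        l_max = lengths.max() if len(lengths) else 1.0
        for seg, length in zip(segments, lengths):
            rr, cc = segment_pixels(seg, labels.shape)
            if len(rr) == 0:
                continue
            q = length / l_max                          # Q(l_i) = L(l_i) / L_max
            ids, counts = np.unique(labels[rr, cc], return_counts=True)
            ld[ids] += q * counts / len(rr)             # Q · N(P_k ∩ l_i) / N(l_i)
        return ld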
Step 4, calculate the superpixel line segment parallelism. When t line segments (from Img_L) intersect P_k (from Img_SP), the line segment parallelism is defined as:
LP = Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
where LPC denotes the parallel relation function, expressed as:
LPC(Δθ_nm) = 1 if Δθ_nm ≤ α, and 0 otherwise
where Δθ_nm is the inclination angle difference between line segments l_n and l_m, LPC(Δθ_nm) is the value of the parallel relation function when the inclination angle difference is Δθ_nm, and α is the tolerance of the inclination angle difference.
Step 5, calculate the superpixel line segment strength. When the line segment set L(t) = {l_1, l_2, ..., l_t} intersects superpixel P_k, the superpixel line segment strength SPLI is computed as the superposition of the line segment density and the line segment parallelism:
SPLI(P_k) = LD(P_k) + Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
where SPLI denotes the superpixel line segment strength, t denotes the total number of line segments intersecting superpixel P_k, and LPC(Δθ_nm) denotes the parallel relation function when line segments l_n and l_m form the angle Δθ_nm.
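A sketch of steps 4-5 under the reconstruction above: for each superpixel, every pair of intersecting segments is tested for parallelism and the pair count is added to the density term. The tolerance α is an assumed parameter:

    # Sketch of steps 4-5: parallelism LP and strength SPLI = LD + LP.
    import numpy as np

    def segment_angles(segments):
        """Inclination angle of each segment in degrees, folded into [0, 180)."""
        ang = np.degrees(np.arctan2(segments[:, 3] - segments[:, 1],
                                    segments[:, 2] - segments[:, 0]))
        return ang % 180.0

    def spli(labels, segments, ld, alpha=5.0):
        angles = segment_angles(segments)
        hits = [[] for _ in range(labels.max() + 1)]    # segments touching each P_k
        for i, seg in enumerate(segments):
            rr, cc = segment_pixels(seg, labels.shape)  # helper from the step-3 sketch
            for k in np.unique(labels[rr, cc]):
                hits[k].append(i)
        out = ld.copy()
        for k, idx in enumerate(hits):
            for a in range(len(idx)):
                for b in range(a + 1, len(idx)):
                    d = abs(angles[idx[a]] - angles[idx[b]])
                    d = min(d, 180.0 - d)               # angular wrap-around
                    if d <= alpha:                      # LPC(Δθ_nm) = 1
                        out[k] += 1.0
        return out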
Step 6, calculate the superpixel saliency. Let the pixel-level visual saliency map obtained from the original image be VS_pixel; the superpixel-based visual saliency is then expressed as:
VS_OB(P_k) = (1/n) Σ_{(x_k, y_k) ∈ P_k} I(x_k, y_k)
where VS_OB denotes the superpixel GBVS saliency strength, I(x, y) denotes the pixel value at position (x, y) in the VS_pixel map, (x_k, y_k) denote the coordinates of the pixels in superpixel P_k, and n denotes the total number of pixels in P_k.
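Averaging the pixel-level saliency map over each superpixel is a per-label mean; a sketch (the GBVS map VS_pixel itself is assumed to come from a separate saliency implementation):

    # Sketch of step 6: VS_OB(P_k) = mean of VS_pixel over the pixels of P_k.
    import numpy as np

    def superpixel_saliency(labels: np.ndarray, vs_pixel: np.ndarray) -> np.ndarray:
        n_sp = labels.max() + 1
        sums = np.bincount(labels.ravel(), weights=vs_pixel.ravel(), minlength=n_sp)
        counts = np.bincount(labels.ravel(), minlength=n_sp)
        return sums / np.maximum(counts, 1)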
Step 7, obtain the position intensity map I_MP by weighted fusion:
I_MP = ω·F(SPLI) + (1 − ω)·F(VS_OB)
where ω is the weighting adjustment factor and F(·) denotes a normalization applied to the image, which ensures that the two feature maps are of the same order of magnitude; the position intensity map I_MP is an intensity map expressed in units of superpixels.
Step 8, perform threshold segmentation on the intensity map. Whether a superpixel belongs to a position scene is decided by comparing its gray value with a threshold T:
Img_MP(P_k) = 1 if I_MP(P_k) ≥ T, and 0 otherwise
where T is the threshold for deciding whether a superpixel belongs to a position, and Img_MP is the initial position result map, in which a pixel gray value of 1 denotes the position and 0 denotes the background.
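A sketch of steps 7-8 together; the min-max form of F(·), the value of ω, and the use of Otsu's method for the automatic threshold are assumptions consistent with, but not specified by, the text:

    # Sketch of steps 7-8: weighted fusion into I_MP, then threshold segmentation.
    import numpy as np
    from skimage.filters import threshold_otsu

    def normalize(x: np.ndarray) -> np.ndarray:
        """F(): min-max normalization so both feature maps share one scale."""
        span = x.max() - x.min()
        return (x - x.min()) / span if span > 0 else np.zeros_like(x)

    def position_mask(labels, spli_vals, vs_ob, omega=0.5):
        i_mp = omega * normalize(spli_vals) + (1 - omega) * normalize(vs_ob)
        t = threshold_otsu(i_mp)               # automatic threshold selection
        keep = i_mp >= t                       # superpixels assigned to the position
        return keep[labels].astype(np.uint8)   # paint per-superpixel decisions back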
Step 9, image morphological processing. The result map Img_MP is post-processed as follows:
Img_MP^M = MO(MC(MA(HF(Img_MP))))
where HF, MA, MC, and MO denote hole filling, maximum-area filtering, morphological closing, and morphological opening respectively, and Img_MP^M denotes the morphologically processed Img_MP.
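A sketch of step 9 with scipy/skimage primitives; the structuring-element size and the reading of maximum-area filtering as "keep the largest connected component" are assumptions:

    # Sketch of step 9: HF -> MA -> MC -> MO on the binary map Img_MP.
    import numpy as np
    from scipy import ndimage
    from skimage.morphology import binary_closing, binary_opening, disk

    def morph_postprocess(img_mp: np.ndarray, se_radius: int = 5) -> np.ndarray:
        mask = ndimage.binary_fill_holes(img_mp.astype(bool))   # HF: hole filling
        lab, n = ndimage.label(mask)                            # MA: max-area filter
        if n > 0:
            areas = ndimage.sum(mask, lab, index=range(1, n + 1))
            mask = lab == (1 + int(np.argmax(areas)))
        se = disk(se_radius)
        mask = binary_closing(mask, se)                         # MC: closing
        mask = binary_opening(mask, se)                         # MO: opening
        return mask.astype(np.uint8)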
Step 10, obtain the target frames. Compute the minimum enclosing rectangle of each target connected component in the processed image Img_MP^M as the final recognized target frame, and superimpose the target frames on the original image to obtain the final position target recognition result.
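A sketch of step 10 using connected components and bounding rectangles (axis-aligned boxes are assumed; cv2.minAreaRect would give rotated rectangles if the minimum enclosing rectangle is meant in the rotated sense):

    # Sketch of step 10: one target frame per connected component.
    import cv2
    import numpy as np

    def draw_target_frames(img0: np.ndarray, mask: np.ndarray) -> np.ndarray:
        out = img0.copy()
        n, lab = cv2.connectedComponents(mask.astype(np.uint8))
        for k in range(1, n):                          # label 0 is background
            x, y, w, h = cv2.boundingRect((lab == k).astype(np.uint8))
            cv2.rectangle(out, (x, y), (x + w, y + h), (0, 0, 255), 2)
        return out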
In general, when a viewer looks at a remote sensing image, attention is drawn to the salient regions of the image (for example, regions salient in brightness, color, or contrast); this is image saliency. In recent years, visual saliency features have been widely applied in pattern recognition and have shown good results in extracting salient information from remote sensing images. Visual saliency is a bottom-up saliency mechanism; the algorithm proposed early by Itti et al. to simulate the human visual attention mechanism is a saliency analysis algorithm based on low-level visual features. Harel improved the classical Itti model, introduced a Markov chain to compute saliency differences, and proposed the graph-based visual saliency (GBVS) algorithm from a graph-theoretic perspective. On the other hand, humans can easily identify positions in remote sensing images, since positions mainly contain many parallel straight lines, short lanes, textures, and other features. This cognitive model based on prior knowledge is a top-down attention mechanism: knowledge-driven saliency.
Inspired by visual saliency and knowledge-driven saliency, and aiming at the difficulty of selecting position samples on remote sensing images and the heavy manual dependence of position recognition, the method combines visual saliency with target line segment features to provide an automatic position recognition and extraction method. In this method, the visual saliency of the target is constructed from the image GBVS saliency, a knowledge-driven saliency model is constructed from the line segment strength features through the line segment density and spatial relations of the target, and the position intensity map is finally obtained by weighted fusion of the two.
In a word, the position target detection algorithm provided by the invention can provide reference research content for remote sensing image position recognition. The method needs no training samples, adopts an automatic threshold selection method, requires no manual intervention in recognition, and can provide technical support for automatic recognition of positions in remote sensing images.

Claims (6)

1. A position extraction method combining visual saliency and line segment strength, characterized by comprising the following steps:
step 1, segmenting the remote sensing image into n superpixels with the SEEDS algorithm to obtain a superpixel map Img_SP;
step 2, processing the remote sensing image with the LSD algorithm to obtain a line segment feature map Img_L;
step 3, calculating the superpixel line segment density LD where line segments in Img_L intersect superpixel P_k in Img_SP, the total number of line segments in Img_L intersecting P_k being t;
step 4, calculating the superpixel line segment parallelism; when t line segments intersect superpixel P_k, the line segment parallelism LP is:
LP = Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
where LPC denotes the parallel relation function and Δθ_nm is the inclination angle difference between line segments l_n and l_m;
step 5, calculating the superpixel line segment strength SPLI, which is the superposition of the line segment parallelism LP and the line segment density LD;
step 6, obtaining the pixel-level visual saliency map VS_pixel of the original remote sensing image and calculating the superpixel saliency VS_OB;
Step 7, obtaining a position intensity chart I based on a mode of adopting weighted fusion MP
Figure FDA0003212434700000012
Wherein the content of the first and second substances,
Figure FDA0003212434700000013
f () represents a normalization process on the image for weighting the adjustment factor;
step 8, performing threshold segmentation on the position intensity map I_MP to obtain an initial position result map Img_MP, in which superpixels belonging to the position target are assigned 1 and the background is assigned 0;
step 9, performing morphological processing on the remote sensing image, comprising four operations: hole filling, maximum-area filtering, morphological closing, and morphological opening, to obtain the morphologically processed image Img_MP^M;
Step 10, calculating an image
Figure FDA0003212434700000015
And the connected rectangle of the medium target connected domain is used as a final identified target frame, and the target frame is superposed on the original remote sensing image to obtain a final position target identification result.
2. The position extraction method combining visual saliency and line segment strength according to claim 1, characterized in that the superpixel line segment density LD in step 3 is calculated as follows:
LD(P_k) = Σ_{i=1}^{t} Q(l_i) · N(P_k ∩ l_i) / N(l_i)
where l_i denotes a line segment extracted in Img_L, N(P_k ∩ l_i) denotes the number of pixels where line segment l_i intersects superpixel P_k, N(l_i) denotes the total number of pixels in l_i, and Q(l_i) denotes the weight of the line segment, calculated as:
Q(l_i) = L(l_i)/L_max
where L(l_i) is the length of line segment l_i and L_max is the maximum length over all line segments.
3. The position extraction method combining visual saliency and line segment strength according to claim 2, characterized in that the parallel relation function LPC in step 4 is specifically:
LPC(Δθ_nm) = 1 if Δθ_nm ≤ α, and 0 otherwise
where α is the tolerance of the inclination angle difference.
4. The position extraction method combining visual saliency and line segment strength according to claim 3, characterized in that the superpixel line segment strength SPLI in step 5 is calculated as follows:
SPLI(P_k) = LD(P_k) + Σ_{n=1}^{t} Σ_{m=n+1}^{t} LPC(Δθ_nm)
5. The position extraction method combining visual saliency and line segment strength according to claim 4, characterized in that the superpixel saliency VS_OB in step 6 is calculated as follows:
VS_OB(P_k) = (1/n) Σ_{(x_k, y_k) ∈ P_k} I(x_k, y_k)
where VS_OB denotes the superpixel GBVS saliency strength, I(x, y) denotes the pixel value at position (x, y) in the VS_pixel map, (x_k, y_k) denote the coordinates of the pixels in superpixel P_k, and n denotes the total number of pixels in P_k.
6. The position extraction method combining visual saliency and line segment strength according to claim 5, characterized in that the threshold segmentation in step 8 is as follows:
Img_MP(P_k) = 1 if I_MP(P_k) ≥ T, and 0 otherwise
where T is the threshold for deciding whether a superpixel belongs to a position, I_MP(P_k) denotes the position intensity of superpixel P_k, and Img_MP denotes the initial position result map, in which a pixel gray value of 1 denotes the position and 0 denotes the background.
CN202110934692.7A 2021-08-16 2021-08-16 Position extraction method combining visual saliency and line segment strength Active CN113658129B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110934692.7A CN113658129B (en) 2021-08-16 2021-08-16 Position extraction method combining visual saliency and line segment strength

Publications (2)

Publication Number Publication Date
CN113658129A CN113658129A (en) 2021-11-16
CN113658129B (en) 2022-12-09

Family

ID=78480358

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110934692.7A Active CN113658129B (en) 2021-08-16 2021-08-16 Position extraction method combining visual saliency and line segment strength

Country Status (1)

Country Link
CN (1) CN113658129B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114998740B (en) * 2022-06-13 2023-07-21 中国电子科技集团公司第五十四研究所 Airport linear feature extraction method based on line segment distribution
CN115909050B (en) * 2022-10-26 2023-06-23 中国电子科技集团公司第五十四研究所 Remote sensing image airport extraction method combining line segment direction and morphological difference

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930868A (en) * 2016-04-20 2016-09-07 北京航空航天大学 Low-resolution airport target detection method based on hierarchical reinforcement learning
CN107229917A (en) * 2017-05-31 2017-10-03 北京师范大学 A kind of several remote sensing image general character well-marked target detection methods clustered based on iteration

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on remote sensing information extraction algorithms combining saliency detection and superpixel segmentation; Yan Qi et al.; Application Research of Computers; 2017-07-27 (No. 07); full text *

Also Published As

Publication number Publication date
CN113658129A (en) 2021-11-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant