CN109583450A - Salient region detection method based on a feedforward neural network fusing visual attention priors - Google Patents

Salient region detection method based on a feedforward neural network fusing visual attention priors

Info

Publication number
CN109583450A
Authority
CN
China
Prior art keywords
super-pixel
characteristic
priori
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811423906.9A
Other languages
Chinese (zh)
Inventor
Zhang Jinxia (张金霞)
Wei Haikun (魏海坤)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University filed Critical Southeast University
Priority to CN201811423906.9A priority Critical patent/CN109583450A/en
Publication of CN109583450A publication Critical patent/CN109583450A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour

Abstract

The invention discloses a salient region detection method based on a feedforward neural network fusing visual attention priors. An input image is segmented into multiple mutually disjoint superpixels. According to the rarity property of low-level visual attention priors, a rarity feature value is computed for each superpixel based on different context regions. According to the contrast property of low-level visual attention priors, contrast feature values between each superpixel and different grid regions are computed. According to the center-bias property of low-level visual attention priors, spatial distance feature values between each superpixel and the image center are computed. The high-level prior properties of each superpixel are modeled with an existing deep network model to obtain the corresponding high-level prior feature values. A multilayer feedforward neural network then fuses the low-level and high-level priors and computes the probability that each superpixel belongs to the salient class, from which the final saliency map is obtained. The invention effectively detects salient objects in images.

Description

Salient region detection method based on a feedforward neural network fusing visual attention priors
Technical field
The present invention relates to the technical field of image processing, and in particular to a salient region detection method based on a feedforward neural network fusing visual attention priors.
Background technique
Since visual saliency detection originates from the visual attention mechanism of the human eye, many scholars have drawn on features of human visual attention to extract salient regions from images. The human visual attention mechanism contains many useful cognitive properties. Early models of human visual attention were mostly inspired by low-level cognitive properties such as contrast.
Professor Jeremy Wolfe of Harvard Medical School studied the factors that influence visual attention and showed that visual attention is more likely to be drawn to objects with higher contrast against their surroundings. Many scholars have modeled low-level visual attention priors by computing the contrast between image regions based on features such as color, orientation, brightness or texture. These low-level priors help improve the performance of salient object detection.
In recent years, some scholars have begun to focus on high-level visual attention priors. At the top conferences in computer vision over the past two years, some scholars have constructed hierarchical visual representations based on deep convolutional neural networks (DCNNs) to extract high-level prior features from images. Such high-level features describe high-level attention cues in images, such as objectness, very well. These studies show that high-level prior features learned by deep networks can effectively improve visual attention modeling. However, deep-network-based high-level priors also have limitations; for example, they cannot localize object positions precisely, because the multiple convolution and pooling layers in a deep network blur the edge information of targets. Hand-crafted low-level priors, by contrast, model target edge information well and therefore usefully complement deep-network-based high-level priors.
Summary of the invention
The technical problem to be solved by the present invention is to overcome the deficiencies of the prior art by providing a salient region detection method based on a feedforward neural network fusing visual attention priors. The invention combines multiple low-level priors with deep-network-based high-level priors, and fuses these priors with a multilayer feedforward neural network to detect salient regions in images.
To solve the above technical problem, the present invention adopts the following technical scheme:
A salient region detection method based on a feedforward neural network fusing visual attention priors, comprising the following steps:
Step S1: segment the input image into multiple mutually disjoint superpixels;
Step S2: according to the rarity property of low-level visual attention priors, compute a rarity feature value for each superpixel based on different context regions;
Step S3: according to the contrast property of low-level visual attention priors, compute for each superpixel its contrast feature values against different grid regions;
Step S4: according to the center-bias property of low-level visual attention priors, compute for each superpixel its spatial distance feature values to the image center, the spatial distance feature values comprising a horizontal distance, a vertical distance and an overall distance;
Step S5: model the high-level prior properties of each superpixel based on an existing deep network model to obtain the corresponding high-level prior feature values;
Step S6: fuse the low-level and high-level priors with a multilayer feedforward neural network, i.e. fuse the rarity feature values, contrast feature values, spatial distance feature values and high-level prior feature values obtained in steps S2-S5, and compute the probability that each superpixel belongs to the salient class, from which the final saliency map is obtained.
As a further refinement of the salient region detection method based on a feedforward neural network fusing visual attention priors of the present invention, in step S1 the input image of size w*h is segmented into n mutually disjoint superpixels based on the existing SLIC algorithm, where w is the width and h is the height; the spatial position feature F_S of each superpixel is defined as the average of the spatial position features of the pixels in that superpixel, and the color feature F_C of each superpixel is defined as the average of the color features of the pixels in that superpixel.
As a further refinement of the above scheme, in step S2 the rarity feature value of the i-th superpixel in different regions of the image is computed based on context regions:
R(i) = -log(p(i))
where p(i) is the frequency of occurrence of the i-th superpixel's feature, and R(i) is the rarity feature value of the i-th superpixel.
As a further refinement of the above scheme, in step S2 the frequency of occurrence of the i-th superpixel's feature is computed based on the CIELab color feature, yielding the rarity feature value of each superpixel.
As a further refinement of the above scheme, in step S2 a context region of width a*w and height a*h is centered at the position of the i-th superpixel, where a is the ratio of the context region's width and height to the width and height of the original image.
As a further refinement of the above scheme, in step S3 the original image is divided into multiple grid cells, where the color feature and spatial position feature of each grid cell are defined as those of the spatially nearest superpixel; the contrast feature value D(i, x) between the i-th superpixel and the x-th grid cell is computed from the color feature distance and the spatial proximity:
D(i, x) = d(F_i^c, F_x^c) * w(F_i^s, F_x^s), with w(F_i^s, F_x^s) = exp(-||F_i^s - F_x^s||^2 / σ)
where F_i^c and F_x^c are the color features of the i-th superpixel and the x-th grid cell respectively, and d(F_i^c, F_x^c) computes the distance between them; F_i^s and F_x^s are their spatial position features, and w(F_i^s, F_x^s) computes their spatial proximity; σ is a constant controlling the influence of spatial distance on spatial proximity.
As a further refinement of the above scheme, in step S4 the horizontal distance is the spatial distance between the superpixel and the image center in the horizontal direction, the vertical distance is the spatial distance between the superpixel and the image center in the vertical direction, and the overall distance is the overall spatial distance between the superpixel and the image center.
As a further refinement of the above scheme, in step S5 the high-level prior feature values are learned based on an existing 16-layer VGG model comprising 5 groups of convolutional layers, 5 pooling layers and 3 fully connected layers in total; the feature maps produced by the last convolutional layer of the model, i.e. the Conv5_3 layer, are extracted as the high-level prior feature values of the image; each feature map is resized to the input image size, and the high-level prior feature value of each superpixel is defined as the average of the high-level prior feature values of the pixels inside the superpixel.
As a further refinement of the above scheme, in step S6 a feedforward neural network with 3 hidden layers fuses the low-level and high-level prior features and computes the probability that each superpixel belongs to the salient class, from which the final saliency map is obtained; the 3 hidden layers of the feedforward neural network contain 1000, 500 and 200 nodes respectively.
Compared with the prior art, the above technical scheme of the invention has the following technical effects:
Aimed at the difficulties of salient region detection in today's complex natural scenes, the present invention extracts and learns low-level and high-level visual attention priors in order to overcome the low detection rate caused by cluttered image backgrounds, thereby suppressing the background regions of complex scenes and picking out the most eye-catching image regions. Applying the proposed salient region detection method based on a multilayer feedforward neural network fusing visual attention priors to the problem of detecting salient regions in images effectively improves the S-measure of salient region detection.
Detailed description of the invention
Fig. 1 is a schematic diagram of the overall flow of the invention.
Specific embodiment
To make the objectives, technical solutions and advantages of the present invention clearer, the invention is described in detail below with reference to the accompanying drawing and specific embodiments.
As shown in Fig. 1, the salient region detection method based on a multilayer feedforward neural network fusing visual attention priors of this embodiment comprises the following steps in order:
S1: Segment the input image into multiple mutually disjoint superpixels. Based on the existing Simple Linear Iterative Clustering (SLIC) algorithm, the input image of size w*h is over-segmented into n mutually disjoint superpixels, where w is the width and h is the height; n is set to 300 in the present invention. The spatial position feature F_S of each superpixel is defined as the average of the spatial position features of the pixels in that superpixel, and the color feature F_C of each superpixel is defined as the average of the color features of the pixels in that superpixel.
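The following sketch illustrates step S1. It is a minimal illustration under stated assumptions, not the patented implementation: it relies on scikit-image's SLIC and on normalized pixel coordinates, neither of which is fixed by the patent.

    import numpy as np
    from skimage.color import rgb2lab
    from skimage.segmentation import slic

    def superpixel_features(image_rgb, n_segments=300):
        """Over-segment with SLIC; return per-superpixel mean features F_C, F_S."""
        labels = slic(image_rgb, n_segments=n_segments, start_label=0)
        lab = rgb2lab(image_rgb)
        h, w = labels.shape
        ys, xs = np.mgrid[0:h, 0:w]
        pos = np.stack([xs / w, ys / h], axis=-1)    # normalised (x, y) positions
        n = labels.max() + 1
        f_c = np.zeros((n, 3))
        f_s = np.zeros((n, 2))
        for i in range(n):
            mask = labels == i
            f_c[i] = lab[mask].mean(axis=0)          # mean CIELab colour, F_C
            f_s[i] = pos[mask].mean(axis=0)          # mean spatial position, F_S
        return labels, f_c, f_s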
S2: According to the rarity property of low-level visual attention priors, compute a rarity feature value for each superpixel based on different context regions. Rarity reflects that human attention is drawn to rare things and automatically ignores frequently seen ones. The present invention computes, based on context regions, the rarity feature value of the i-th superpixel in different regions of the image:
R(i) = -log(p(i))
where p(i) is the frequency of occurrence of the i-th superpixel's feature, and R(i) is the rarity feature value of the i-th superpixel. According to this formula, the higher the frequency p(i) of the i-th superpixel's feature, the lower its rarity feature value R(i). The frequency of occurrence of the i-th superpixel's feature is computed based on the CIELab color feature, yielding the rarity feature value of each superpixel. A context region of width a*w and height a*h is centered at the position of the i-th superpixel; different values of a yield context regions of different sizes, and a is set to 0.3, 0.5 and 1 respectively to obtain 3 regions of different sizes.
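A minimal sketch of step S2 for one context-region size, assuming p(i) is estimated by coarsely quantizing each superpixel's mean CIELab color and counting matching superpixels inside the context window; the quantization granularity bins is an illustrative choice, as the patent does not state how the color frequency is binned.

    def rarity_features(f_c, f_s, a=0.5, bins=8):
        """R(i) = -log p(i), with p(i) estimated inside the a*w by a*h window."""
        n = len(f_c)
        # coarse colour codes: quantise each Lab channel into `bins` levels
        lo, hi = f_c.min(axis=0), f_c.max(axis=0)
        q = np.clip(((f_c - lo) / (hi - lo + 1e-9) * bins).astype(int),
                    0, bins - 1)
        codes = (q[:, 0] * bins + q[:, 1]) * bins + q[:, 2]
        r = np.zeros(n)
        for i in range(n):
            # superpixels whose centres fall inside the context window of i
            # (coordinates are normalised, so the half-size is simply a/2)
            inside = np.all(np.abs(f_s - f_s[i]) <= a / 2.0, axis=1)
            p = (codes[inside] == codes[i]).mean()   # occurrence frequency p(i)
            r[i] = -np.log(p)                        # rarity feature value R(i)
        return r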
S3: According to the contrast property of low-level visual attention priors, compute for each superpixel its contrast feature values against different grid regions. Contrast reflects that human attention is easily drawn to things that differ from their surroundings. The present invention divides the original image into 20 rows by 20 columns, 400 grid cells in total; the color feature and spatial position feature of each grid cell are defined as those of the spatially nearest superpixel. Then the contrast feature value D(i, x) between the i-th superpixel and the x-th grid cell is computed from the color feature distance and the spatial proximity:
D(i, x) = d(F_i^c, F_x^c) * w(F_i^s, F_x^s), with w(F_i^s, F_x^s) = exp(-||F_i^s - F_x^s||^2 / σ)
where F_i^c and F_x^c are the color features of the i-th superpixel and the x-th grid cell respectively, and d(F_i^c, F_x^c) computes the distance between them; F_i^s and F_x^s are their spatial position features, and w(F_i^s, F_x^s) computes their spatial proximity; σ is a constant controlling the influence of spatial distance on spatial proximity, set to 0.2.
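A sketch of step S3 under the formula above, continuing from the features of the S1 sketch; summing D(i, x) over all 400 grid cells into a single contrast value per superpixel is an assumption, since the text defines only the pairwise values.

    def contrast_features(f_c, f_s, grid=20, sigma=0.2):
        """Contrast of each superpixel against a grid x grid partition."""
        # grid-cell centres in normalised coordinates
        ticks = (np.arange(grid) + 0.5) / grid
        cx, cy = np.meshgrid(ticks, ticks)
        centres = np.stack([cx.ravel(), cy.ravel()], axis=-1)  # (grid*grid, 2)
        # each cell inherits the features of its spatially nearest superpixel
        d2 = ((centres[:, None, :] - f_s[None, :, :]) ** 2).sum(-1)
        nearest = d2.argmin(axis=1)
        g_c, g_s = f_c[nearest], f_s[nearest]
        # D(i, x) = colour distance * Gaussian spatial proximity
        col = np.linalg.norm(f_c[:, None, :] - g_c[None, :, :], axis=-1)
        prox = np.exp(-((f_s[:, None, :] - g_s[None, :, :]) ** 2).sum(-1) / sigma)
        return (col * prox).sum(axis=1)              # one value per superpixel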
S4: According to the center-bias property of low-level visual attention priors, compute for each superpixel its spatial distance feature values to the image center; these comprise a horizontal distance, a vertical distance and an overall distance. The horizontal distance is the spatial distance between the superpixel and the image center in the horizontal direction, the vertical distance is the spatial distance between the superpixel and the image center in the vertical direction, and the overall distance is the overall spatial distance between the superpixel and the image center.
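Step S4 is simple geometry; the sketch below assumes the normalized coordinates of the S1 sketch, so the image center is (0.5, 0.5).

    def center_bias_features(f_s):
        """Horizontal, vertical and overall distance to the image centre."""
        diff = f_s - np.array([0.5, 0.5])            # centre in normalised coords
        horizontal = np.abs(diff[:, 0])
        vertical = np.abs(diff[:, 1])
        overall = np.linalg.norm(diff, axis=1)
        return np.stack([horizontal, vertical, overall], axis=-1)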
S5: Model the high-level prior properties of each superpixel based on an existing deep network model to obtain the corresponding high-level prior feature values. The high-level prior feature values are learned based on an existing 16-layer VGG model, which comprises 5 groups of convolutional layers, 5 pooling layers and 3 fully connected layers in total. The present invention extracts the feature maps produced by the last convolutional layer of the model, the "Conv5_3" layer (512 feature maps in total), as the high-level prior feature values of the image; each feature map is resized to the input image size (w*h), and the high-level prior feature value of each superpixel is defined as the average of the high-level prior feature values of the pixels inside the superpixel.
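A sketch of step S5 using torchvision's pretrained VGG16 as a stand-in for "an existing 16-layer VGG model"; the ImageNet weights and the features[:30] slice (which ends right after the conv5_3 convolution and its ReLU) are assumptions about details the text leaves open.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    def deep_features(image_tensor, labels):
        """image_tensor: (1, 3, h, w) ImageNet-normalised RGB; labels: (h, w)."""
        backbone = vgg16(weights="IMAGENET1K_V1").features[:30].eval()
        with torch.no_grad():
            fmap = backbone(image_tensor)            # (1, 512, h/16, w/16)
            fmap = F.interpolate(fmap, size=labels.shape,  # resize to input size
                                 mode="bilinear", align_corners=False)[0]
        lab = torch.as_tensor(labels)
        n = int(lab.max()) + 1
        feats = torch.zeros(n, fmap.shape[0])
        for i in range(n):
            feats[i] = fmap[:, lab == i].mean(dim=1)  # per-superpixel average
        return feats.numpy()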
S6: Fuse the low-level and high-level features with a feedforward neural network having 3 hidden layers, and compute the probability that each superpixel belongs to the salient class, from which the final saliency map is obtained. The 3 hidden layers of the feedforward neural network contain 1000, 500 and 200 nodes respectively.
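A sketch of the fusion network of step S6 with the stated 1000/500/200 hidden layers; the ReLU activations and sigmoid output are assumptions, since only the hidden-layer sizes are specified. The input of each superpixel is the concatenation of its feature values from steps S2-S5.

    import torch.nn as nn

    class FusionNet(nn.Module):
        """Feedforward fusion network: hidden layers of 1000, 500, 200 nodes."""

        def __init__(self, in_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 1000), nn.ReLU(),
                nn.Linear(1000, 500), nn.ReLU(),
                nn.Linear(500, 200), nn.ReLU(),
                nn.Linear(200, 1), nn.Sigmoid(),     # probability of salient class
            )

        def forward(self, x):                        # x: (batch, in_dim) priors
            return self.net(x)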
To verify the effectiveness of the salient region detection method provided by the invention, the method is compared below with 9 salient object detection methods on the SOD and ECSSD databases. The 9 methods are: SF, AM, GR, CL, GP, RRWR, PM, MST and GF. The method of the invention is denoted Ours.
Tables 1 and 2 compare the S-measure of this method with that of the other methods. The SOD and ECSSD databases are both salient object detection benchmarks containing complex images. The performance comparison in the tables shows that the present invention fully exploits the multilayer feedforward neural network to effectively integrate low-level and high-level visual attention priors, and is better suited to detecting salient objects in images of complex natural scenes.
Table 1. S-measure comparison of salient object detection methods on the SOD database

Method     SF     AM     GR     CL     GP     RRWR   PM     MST    GF     Ours
S-measure  0.420  0.606  0.586  0.563  0.620  0.588  0.614  0.609  0.618  0.723
Table 2. S-measure comparison of salient object detection methods on the ECSSD database

Method     SF     AM     GR     CL     GP     RRWR   PM     MST    GF     Ours
S-measure  0.451  0.639  0.643  0.628  0.658  0.645  0.667  0.648  0.661  0.752
The above description is only a specific embodiment of the invention, but the scope of protection of the invention is not limited thereto. Any change or substitution that can readily be conceived by those skilled in the art within the technical scope disclosed by the invention shall fall within the scope of protection of the invention.

Claims (9)

1. A salient region detection method based on a feedforward neural network fusing visual attention priors, characterized by comprising the following steps:
Step S1: segment the input image into multiple mutually disjoint superpixels;
Step S2: according to the rarity property of low-level visual attention priors, compute a rarity feature value for each superpixel based on different context regions;
Step S3: according to the contrast property of low-level visual attention priors, compute for each superpixel its contrast feature values against different grid regions;
Step S4: according to the center-bias property of low-level visual attention priors, compute for each superpixel its spatial distance feature values to the image center, the spatial distance feature values comprising a horizontal distance, a vertical distance and an overall distance;
Step S5: model the high-level prior properties of each superpixel based on an existing deep network model to obtain the corresponding high-level prior feature values;
Step S6: fuse the low-level and high-level priors with a multilayer feedforward neural network, i.e. fuse the rarity feature values, contrast feature values, spatial distance feature values and high-level prior feature values obtained in steps S2-S5, and compute the probability that each superpixel belongs to the salient class, from which the final saliency map is obtained.
2. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S1 the input image of size w*h is segmented into n mutually disjoint superpixels based on the existing SLIC algorithm, where w is the width and h is the height; the spatial position feature F_S of each superpixel is defined as the average of the spatial position features of the pixels in that superpixel, and the color feature F_C of each superpixel is defined as the average of the color features of the pixels in that superpixel.
3. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S2 the rarity feature value of the i-th superpixel in different regions of the image is computed based on context regions:
R(i) = -log(p(i))
where p(i) is the frequency of occurrence of the i-th superpixel's feature, and R(i) is the rarity feature value of the i-th superpixel.
4. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S2 the frequency of occurrence of the i-th superpixel's feature is computed based on the CIELab color feature, yielding the rarity feature value of each superpixel.
5. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S2 a context region of width a*w and height a*h is centered at the position of the i-th superpixel, where a is the ratio of the context region's width and height to the width and height of the original image.
6. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S3 the original image is divided into multiple grid cells, where the color feature and spatial position feature of each grid cell are defined as those of the spatially nearest superpixel; the contrast feature value D(i, x) between the i-th superpixel and the x-th grid cell is computed from the color feature distance and the spatial proximity:
D(i, x) = d(F_i^c, F_x^c) * w(F_i^s, F_x^s), with w(F_i^s, F_x^s) = exp(-||F_i^s - F_x^s||^2 / σ)
where F_i^c and F_x^c are the color features of the i-th superpixel and the x-th grid cell respectively, and d(F_i^c, F_x^c) computes the distance between them; F_i^s and F_x^s are their spatial position features, and w(F_i^s, F_x^s) computes their spatial proximity; σ is a constant controlling the influence of spatial distance on spatial proximity.
7. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S4 the horizontal distance is the spatial distance between the superpixel and the image center in the horizontal direction, the vertical distance is the spatial distance between the superpixel and the image center in the vertical direction, and the overall distance is the overall spatial distance between the superpixel and the image center.
8. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S5 the high-level prior feature values are learned based on an existing 16-layer VGG model comprising 5 groups of convolutional layers, 5 pooling layers and 3 fully connected layers in total; the feature maps produced by the last convolutional layer of the model, i.e. the Conv5_3 layer, are extracted as the high-level prior feature values of the image; each feature map is resized to the input image size, and the high-level prior feature value of each superpixel is defined as the average of the high-level prior feature values of the pixels inside the superpixel.
9. The salient region detection method based on a feedforward neural network fusing visual attention priors according to claim 1, characterized in that in step S6 a feedforward neural network with 3 hidden layers fuses the low-level and high-level prior features and computes the probability that each superpixel belongs to the salient class, from which the final saliency map is obtained; the 3 hidden layers of the feedforward neural network contain 1000, 500 and 200 nodes respectively.
CN201811423906.9A 2018-11-27 2018-11-27 Salient region detection method based on a feedforward neural network fusing visual attention priors Pending CN109583450A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811423906.9A CN109583450A (en) 2018-11-27 2018-11-27 Salient region detection method based on a feedforward neural network fusing visual attention priors

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811423906.9A CN109583450A (en) 2018-11-27 2018-11-27 Salient region detection method based on a feedforward neural network fusing visual attention priors

Publications (1)

Publication Number Publication Date
CN109583450A true CN109583450A (en) 2019-04-05

Family

ID=65924845

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811423906.9A Pending CN109583450A (en) 2018-11-27 2018-11-27 Salient region detection method based on a feedforward neural network fusing visual attention priors

Country Status (1)

Country Link
CN (1) CN109583450A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106778776A (en) * 2016-11-30 2017-05-31 武汉大学深圳研究院 A kind of time-space domain significance detection method based on location-prior information
CN107833220A (en) * 2017-11-28 2018-03-23 河海大学常州校区 Fabric defect detection method based on depth convolutional neural networks and vision significance

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GAYOUNG LEE et al.: "ELD-Net: An Efficient Deep Learning Architecture for Accurate Saliency Detection", IEEE Transactions on Pattern Analysis and Machine Intelligence *
WANG Cheng et al.: "Research on salient region detection based on SLIC superpixel segmentation" (基于SLIC超像素分割显著区域检测方法的研究), Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110427980A (en) * 2019-07-11 2019-11-08 东南大学 Merge the obvious object existence determination method of low layer and high-level characteristic
CN110427980B (en) * 2019-07-11 2022-06-03 东南大学 Method for judging existence of salient object by fusing low-level and high-level features
CN113298748A (en) * 2020-02-21 2021-08-24 安徽大学 Image collaborative salient object detection model based on attention mechanism
CN113298748B (en) * 2020-02-21 2022-11-18 安徽大学 Image collaborative salient object detection model based on attention mechanism
CN111815610A (en) * 2020-07-13 2020-10-23 广东工业大学 Lesion focus detection method and device of lesion image
CN111815610B (en) * 2020-07-13 2023-09-12 广东工业大学 Lesion detection method and device for lesion image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190405