CN114022747A - Salient object extraction method based on feature perception - Google Patents
- Publication number
- CN114022747A (application CN202210015109.7A)
- Authority
- CN
- China
- Prior art keywords
- feature
- characteristic
- channel
- matrix
- feature matrix
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention belongs to the technical field of image processing, and particularly relates to a salient object extraction method based on feature perception. The original picture is processed through five features (the MSCN factor, the image entropy, the dark channel, and the H and S channels of the HSV color space) to obtain 5 feature layers and the corresponding feature matrices C_i (i = 1, …, 5). Each feature matrix C_i is down-sampled by a factor of 4 to obtain a down-sampled feature matrix D_i; each D_i is normalized to obtain a normalized down-sampled feature matrix G_i; the single-feature salient object is extracted from each G_i to obtain a binary feature matrix Y_i; the single-feature salient objects are fused by weight to obtain a fused matrix Z, and the positions marked in Z correspond to salient object one in the original picture. The method fuses multi-source complementary information at different levels, such as the feature level and the decision level, and improves the accuracy of salient object extraction.
Description
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a salient object extraction method based on feature perception.
Background
Since machine vision began to receive wide and close attention in the 1960s, researchers have been dedicated to giving computers a human-like ability to adapt to the external environment. Before a computer can perform high-level semantic analysis and understanding of an image, the object-extraction step must first be solved. Object extraction has therefore become a fundamental problem of computer vision, and great progress has been made on computable models of saliency-based visual attention.
A key question is how to effectively capture the salient objects of an image. According to the object being computed, saliency computation can be divided into gaze-point-based saliency computation and salient-region computation. The salient region obtained by gaze-point-based computation consists of the small number of points in the image that attract human attention, so recovering the full salient object is the key problem for extraction algorithms that use a gaze-point-based saliency map. Salient-region computation, by contrast, highlights whole salient regions in the image and thereby greatly improves the efficiency of salient object extraction. However, when the saliency attributes of a region are not considered sufficiently, highlighting errors occur easily, and such errors greatly degrade the result of salient object extraction. Therefore, how to obtain a valid salient region is the key problem to be solved in salient object extraction algorithms that use a saliency map computed from salient regions.
Disclosure of Invention
In order to obtain an effective salient region, the invention provides a salient object extraction method based on feature perception. Different features are fused, different weights are given to different features, and multi-source complementary information at different levels, such as the feature level and the decision level, is fused, so that the extraction accuracy of the detected salient object is improved.
The invention is realized by the following technical scheme:
The invention provides a salient object extraction method based on feature perception, which comprises the following steps:
S1, processing the original picture through five features (the MSCN factor, the image entropy, the dark channel, and the H and S channels of the HSV color space) to obtain 5 feature layers and the corresponding feature matrices C_i. After the original picture is imported, 5 feature layers of the same size as the original picture are generated: the MSCN factor represents texture features, and its layer gives feature matrix C_1; the image entropy expresses detail features, and its layer gives feature matrix C_2; the dark channel layer gives feature matrix C_3 (the dark channel is effective when the colors of the foreground and the background of the original picture are similar); the H channel of the HSV color space represents the hue feature, and its layer gives feature matrix C_4; the S channel of the HSV color space represents the saturation feature, and its layer gives feature matrix C_5;
S3: down-sampling feature matrixNormalization processing is carried out to obtain a normalized down-sampling feature matrix;
S4, extracting the single-feature salient object from each normalized down-sampled feature matrix G_i to obtain a binary feature matrix Y_i;
S5: carrying out weight fusion on the single-feature significant target to obtain a feature matrixFeature matrixThe position of the original image is a first obvious target;
wherein: i = 1, 2, 3, 4, 5; 1 denotes the MSCN factor, 2 denotes the image entropy, 3 denotes the dark channel, 4 denotes the H channel of the HSV color space, and 5 denotes the S channel of the HSV color space.
The down-sampled feature matrix D_i is obtained as follows:
D_i(x, y) = (1/16) · Σ_{p=1..4} Σ_{q=1..4} C_i(4(x−1)+p, 4(y−1)+q)
wherein: (x, y) are the coordinates of the down-sampled point corresponding to a 4×4 block in the feature layer, p is the row number within the 4×4 block, q is the column number within the 4×4 block, C_i(4(x−1)+p, 4(y−1)+q) is the feature value of the corresponding point, and D_i contains all points (x, y).
Down-sampling reduces the size of the feature matrix and speeds up subsequent computation; it is simple to obtain and efficient to run.
The normalized down-sampled feature matrix G_i is obtained as follows:
G_i(x, y) = (D_i(x, y) − min(D_i)) / (max(D_i) − min(D_i))
wherein: min(D_i) is the minimum value of the whole feature matrix D_i, max(D_i) is the maximum value of the whole feature matrix D_i, and G_i contains all points (x, y).
Normalization suppresses the influence of abnormal values and improves the accuracy of feature extraction.
(1) Single-feature salient object extraction for the MSCN factor and the image entropy
A threshold is set to extract the single-feature salient object: when G_i(x, y) > T1, the position corresponding to the point (x, y) in the feature layer belongs to the single-feature salient object; the value of a single-feature salient point is recorded as 1, and the value of a non-salient point is recorded as 0, giving the binary matrix Y_i;
(2) Single-feature salient object extraction for the dark channel and the H and S channels of the HSV color space
A statistical histogram is used to extract the salient object. Histogram statistics over the whole normalized down-sampled feature matrix G_i give the overall histogram distribution H_all(k); histogram statistics over the peripheral edge of G_i give the edge histogram distribution H_edge(k). The single-feature salient object is then extracted according to the ratio
R(k) = H_all(k) / H_edge(k)
which performs a whole/edge processing of the histograms. A point (x, y) of the feature layer whose value falls in a value segment k satisfying H_edge(k) = 0 (a value absent from the edge histogram appears in the overall statistical histogram) or R(k) > T (a value segment that increases abnormally in the overall statistical histogram) belongs to the single-feature salient object; the value of a single-feature salient point is recorded as 1, and the value of a non-salient point is recorded as 0;
wherein: T denotes a threshold, (x, y) are the coordinates of the down-sampled matrix, and k = 1, 2, 3, …, 50 indexes the value segments;
Further, T1=0.4。
wherein: the threshold T is determined from the mean and the standard deviation of the ratios R(k).
The fused matrix Z is obtained as Z(x, y) = Σ_{i=1..5} w_i · Y_i(x, y), wherein w_i represents the weight of feature i and Z contains all points (x, y);
the weight of the MSCN factor is w_1 = 1, the weight of the image entropy is w_2 = 1, the weight of the dark channel is w_3 = 1.5, the weight of the H channel of the HSV color space is w_4 = 1.5, and the weight of the S channel of the HSV color space is w_5 = 1;
when Z(x, y) exceeds the set fusion threshold, the corresponding position on the original picture belongs to salient object one;
Different features are fused, different weights are given to different features, and multi-source complementary information at different levels, such as the feature level and the decision level, is fused, so that the detection accuracy is improved.
Further, salient object one is fused with the pixel blocks obtained by super-pixel segmentation to obtain salient object two.
When N_1 / N > 40%, i.e. when the pixels of salient object one contained in a single pixel block obtained by super-pixel segmentation exceed 40% of all pixels in that block, the super-pixel block belongs to salient object two;
wherein S_j denotes a single pixel block of the super-pixel segmentation, N denotes the total number of pixels in the single pixel block, and N_1 denotes the number of pixels in the single pixel block that belong to salient object one.
By adopting the technical scheme, the invention has the following advantages:
1. Different features are fused and given different weights; by fusing multi-source complementary information at different levels, such as the feature level and the decision level, the accuracy of salient object extraction is improved.
2. Feature extraction does not depend on a single feature; several features jointly determine the extraction, so the method has a wide application range and high accuracy in extracting the salient object region.
3. The method can effectively extract the salient object even when the color of the salient object region is similar to that of the background.
4. The invention has low computational complexity and obtains results with few resources.
5. The invention uses feature information to simulate the salient object perceived by human eyes, and likewise uses position information to simulate it; combining the feature information and the position information to jointly determine the salient object allows the method to adapt to more kinds of pictures, with better accuracy and robustness.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the description of the embodiments of the present invention or the prior art will be briefly described below, and it is obvious that the drawings described below are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is an original picture;
FIG. 2 is a graph of a downsampled matrix;
FIG. 3 is the single-feature saliency map obtained by single-feature extraction, taking the MSCN factor as an example;
FIG. 4 is an overall statistical histogram of an embodiment;
FIG. 5 is a statistical histogram of the peripheral edges in the example;
FIG. 6 is a graph of the overall-to-edge growth ratio per value segment in the embodiment;
FIG. 7 is a single-feature saliency map, taking the S channel of the HSV color space as an example;
FIG. 8 is a salient object map after fusion by weight;
FIG. 9 is a super-pixel segmented picture of an original picture;
FIG. 10 is a map of the salient objects after fusion;
fig. 11 is a diagram of a downsampling process.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
The embodiment provides a salient object extraction method based on feature perception, which comprises the following steps:
S1, processing the original picture through five features (the MSCN factor, the image entropy, the dark channel, and the H and S channels of the HSV color space) to obtain 5 feature layers and the corresponding feature matrices C_i. As shown in fig. 1, after the original picture is imported, 5 feature layers of the same size as the original picture are generated: the MSCN factor represents texture features, and its layer gives feature matrix C_1; the image entropy expresses detail features, and its layer gives feature matrix C_2; the dark channel layer gives feature matrix C_3 (the dark channel is effective when the colors of the foreground and the background of the original picture are similar); the H channel of the HSV color space represents the hue feature, and its layer gives feature matrix C_4; the S channel of the HSV color space represents the saturation feature, and its layer gives feature matrix C_5. Feature matrix C_1 contains the feature values of all positions (m, n), and likewise C_2, C_3, C_4 and C_5 contain the feature values of all positions (m, n).
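The five feature layers of step S1 can be sketched as follows. This is an illustrative reconstruction, not the patented implementation: the Gaussian window for the MSCN coefficients, the entropy window and bin count, the dark-channel patch size and the stabilizing constant c are all assumed values.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, generic_filter, minimum_filter

def mscn(gray, sigma=1.5, c=1e-3):
    # Mean-Subtracted Contrast-Normalized coefficients (texture feature)
    mu = gaussian_filter(gray, sigma)
    var = gaussian_filter(gray * gray, sigma) - mu * mu
    return (gray - mu) / (np.sqrt(np.abs(var)) + c)

def local_entropy(gray, size=9, bins=16):
    # Shannon entropy of the grey-level histogram in a size x size window
    def ent(w):
        p, _ = np.histogram(w, bins=bins, range=(0.0, 1.0))
        p = p / p.sum()
        p = p[p > 0]
        return -(p * np.log2(p)).sum()
    return generic_filter(gray, ent, size=size)

def dark_channel(rgb, patch=7):
    # Per-pixel minimum over R, G, B, then a minimum filter over a patch
    return minimum_filter(rgb.min(axis=2), size=patch)

def hsv_h_s(rgb):
    # H (hue) and S (saturation) channels of the HSV color space, in [0, 1]
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    d = mx - mn
    s = np.where(mx > 0, d / np.where(mx > 0, mx, 1.0), 0.0)
    safe = np.where(d > 0, d, 1.0)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    h = np.where(mx == r, ((g - b) / safe) % 6.0,
        np.where(mx == g, (b - r) / safe + 2.0, (r - g) / safe + 4.0))
    h = np.where(d > 0, h / 6.0, 0.0)
    return h, s
```

Each function maps an M × N picture (grey or RGB, values in [0, 1]) to an M × N feature layer, i.e. one of the matrices C_1 to C_5.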
S2: respectively down-sampling the feature matrices C_1, C_2, C_3, C_4 and C_5 by a factor of 4 to obtain the down-sampled feature matrices D_1, D_2, D_3, D_4 and D_5;
Fig. 2 shows a down-sampled feature matrix. As shown in fig. 11, an original picture of size M × N is first input to obtain an MSCN feature matrix C_1 of size M × N; C_1 is then down-sampled by a factor of 4 to obtain a down-sampled matrix D_1 of size M1 × N1, wherein M1 = M/4 and N1 = N/4.
The detailed down-sampling process divides the M × N feature matrix into non-overlapping 4×4 blocks and represents each whole block by its mean value, giving a down-sampled feature matrix of size M1 × N1. For any 4×4 block, the down-sampled value is obtained as follows:
D_i(x, y) = (1/16) · Σ_{p=1..4} Σ_{q=1..4} C_i(4(x−1)+p, 4(y−1)+q)
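The 4×4 block-mean down-sampling described above can be written compactly with a reshape. Cropping the picture to a multiple of 4 on each side is an assumed convention for sizes that do not divide evenly, which the patent does not address.

```python
import numpy as np

def downsample_4x4(c):
    # Split an M x N feature matrix into non-overlapping 4x4 blocks and
    # replace each block by its mean, giving an (M/4) x (N/4) matrix.
    m, n = c.shape
    m4, n4 = m - m % 4, n - n % 4          # crop so both sides divide by 4
    blocks = c[:m4, :n4].reshape(m4 // 4, 4, n4 // 4, 4)
    return blocks.mean(axis=(1, 3))
```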
Down-sampling reduces the size of the feature matrix and speeds up subsequent computation; it is simple to obtain and efficient to run.
S3: respectively normalizing the down-sampled feature matrices D_1, D_2, D_3, D_4 and D_5 to obtain the normalized down-sampled feature matrices G_1, G_2, G_3, G_4 and G_5;
G_i(x, y) = (D_i(x, y) − min(D_i)) / (max(D_i) − min(D_i))
wherein: min(D_i) is the minimum value of D_i, max(D_i) is the maximum value of D_i, and (x, y) are the coordinates of the down-sampled matrix;
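The min-max normalization of step S3 admits a direct sketch. Returning an all-zero matrix for a flat input is an assumed convention, since the patent does not state the degenerate case.

```python
import numpy as np

def normalize(d):
    # Min-max normalization over the whole down-sampled feature matrix
    lo, hi = d.min(), d.max()
    if hi == lo:                       # flat matrix: nothing to rescale
        return np.zeros_like(d, dtype=float)
    return (d - lo) / (hi - lo)
```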
Normalization suppresses the influence of abnormal values and improves the accuracy of feature extraction.
S4, extracting the single-feature salient object from each of the normalized down-sampled feature matrices G_1, G_2, G_3, G_4 and G_5 to obtain the binary feature matrices Y_1, Y_2, Y_3, Y_4 and Y_5;
(1) Extracting the single-feature salient object of the MSCN factor to obtain feature matrix Y_1
As shown in fig. 3, the single-feature saliency map is obtained by single-feature extraction, taking the MSCN factor as an example;
a threshold is set to extract the single-feature salient object: when G_1(x, y) > T1, with T1 = 0.4, the position corresponding to the point (x, y) in the feature layer belongs to the single-feature salient object; the value of a single-feature salient point is recorded as 1, the remaining points are non-salient, and their value is recorded as 0;
the feature matrix Y_2 for extracting the single-feature salient object of the image entropy, which likewise contains only the values 0 and 1, can be obtained in the same way;
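The thresholding rule for the MSCN and image-entropy features reduces to one comparison, with T1 = 0.4 as stated above:

```python
import numpy as np

T1 = 0.4  # threshold from the patent

def single_feature_by_threshold(g, t1=T1):
    # Points whose normalized value exceeds T1 are single-feature salient (1);
    # everything else is background (0).
    return (g > t1).astype(np.uint8)
```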
(2) Single-feature salient object extraction for the dark channel and the H and S channels of the HSV color space
Fig. 7 shows the single-feature saliency map, taking the S channel of the HSV color space as an example. Fig. 4 is the overall statistical histogram H_all(k) of the values of feature matrix G_5, and fig. 5 is the statistical histogram H_edge(k) of its peripheral edge. The values are counted in value segments: since the normalized values lie in the range [0, 1], statistics are made over the value segments [0, 0.02], [0.02, 0.04], [0.04, 0.06], [0.06, 0.08], [0.08, 0.1], and so on, i.e. k = 1 represents the value segment [0, 0.02], k = 2 represents [0.02, 0.04], and k = 3 represents [0.04, 0.06]. Values that fall on a segment endpoint are uniformly sorted into the lower segment: 0.02 is placed in [0, 0.02], 0.04 in [0.02, 0.04], 0.06 in [0.04, 0.06], and 0.08 in [0.06, 0.08]. Fig. 6 shows the growth multiple obtained by the whole/edge processing of each value segment, i.e. the count of each value segment in the overall statistical histogram as a multiple of its count in the peripheral-edge statistical histogram.
The S-channel single-feature salient object is extracted according to the ratio
R(k) = H_all(k) / H_edge(k)
wherein the threshold T is determined from the mean and the standard deviation of the ratios R(k). R(k) performs the whole/edge processing of the histograms; a point (x, y) of the feature layer whose value falls in a selected value segment k satisfying H_edge(k) = 0 or R(k) > T belongs to the single-feature salient object. H_edge(k) = 0 with H_all(k) > 0 means that a value absent from the edge histogram appears in the overall statistical histogram, and R(k) > T marks a value segment that increases abnormally in the overall statistical histogram. The value of a single-feature salient point is recorded as 1, and the value of a non-salient point is recorded as 0, giving the feature matrix Y_5.
S5: carrying out weight fusion on the single-feature significant target to obtain a feature matrixFeature matrixThe position of the original image is a first obvious target; feature matrixThe calculation formula is as follows:
the weight of the MSCN factor is 1, i.e. w_1 = 1; the weight of the image entropy is 1, i.e. w_2 = 1; the weight of the dark channel is 1.5, i.e. w_3 = 1.5; the weight of the H channel of the HSV color space is 1.5, i.e. w_4 = 1.5; and the weight of the S channel of the HSV color space is 1, i.e. w_5 = 1.
When Z(x, y) exceeds the set fusion threshold, the corresponding position on the original picture belongs to salient object one; fig. 8 is obtained by fusing the 5 features by weight in this embodiment.
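The weight fusion of step S5 is a weighted sum of the five binary maps. The fusion threshold is not legible in this text; the value 3.0 (half of the total weight, i.e. a weighted majority vote) is an assumed stand-in.

```python
import numpy as np

WEIGHTS = [1.0, 1.0, 1.5, 1.5, 1.0]   # MSCN, entropy, dark channel, H, S

def fuse(single_feature_maps, weights=WEIGHTS, threshold=3.0):
    # Weighted sum of the five binary single-feature maps Y_1..Y_5,
    # thresholded to give the fused salient-object-one map.
    z = sum(w * m for w, m in zip(weights, single_feature_maps))
    return (z > threshold).astype(np.uint8)
```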
Different features are fused, different weights are given to different features, and multi-source complementary information at different levels, such as the feature level and the decision level, is fused, so that the detection accuracy is improved.
Further, fusing salient object one shown in fig. 8 with the pixel blocks obtained by the super-pixel segmentation shown in fig. 9 gives salient object two shown in fig. 10.
When N_1 / N > 40%, i.e. when the pixels of salient object one contained in a single pixel block obtained by super-pixel segmentation exceed 40% of all pixels in that block, the single super-pixel block belongs to salient object two.
Wherein S_j denotes a single pixel block of the super-pixel segmentation, N denotes the total number of pixels in the single pixel block, and N_1 denotes the number of pixels of salient object one contained in the single pixel block. Note that a 4×4 block of the original picture corresponds to one pixel of the down-sampled matrix; the pixel blocks shown in fig. 9 are obtained by super-pixel segmentation, and their pixels are counted according to the same rule.
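The super-pixel fusion admits a short sketch. The label map can come from any super-pixel segmenter (for example SLIC); the patent does not name a particular one, so that choice is an assumption.

```python
import numpy as np

def fuse_with_superpixels(salient1, labels, ratio=0.4):
    # For every super-pixel, mark it as salient object two when more than
    # 40% of its pixels already belong to salient object one.  `labels`
    # is an integer label map from any super-pixel segmenter.
    out = np.zeros_like(salient1, dtype=np.uint8)
    for lab in np.unique(labels):
        mask = labels == lab
        if salient1[mask].mean() > ratio:
            out[mask] = 1
    return out
```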
It should be noted that C in the present invention represents a matrix, i.e. a set of specific values, while C(m, n) represents a specific value; C comprises C_1, C_2, C_3, C_4 and C_5, and the same convention applies to D, G, Y and Z; for example, C_1(m, n) represents a specific value, and C_1 comprises the values C_1(m, n) of all positions.
Both salient object one and salient object two are salient objects extracted by the feature-perception-based salient object extraction method; since salient object two is additionally fused with the super-pixel segmentation, it has higher accuracy than salient object one.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principles of the present invention are intended to be included within the scope of the present invention.
Claims (9)
1. A salient object extraction method based on feature perception, characterized by comprising the following steps:
S1, processing the original picture through the MSCN factor, the image entropy, the dark channel, and the H and S channels of the HSV color space to obtain 5 feature layers and the corresponding feature matrices C_i;
S2: down-sampling each feature matrix C_i by a factor of 4 to obtain a down-sampled feature matrix D_i;
S3: down-sampling feature matrixNormalization processing is carried out to obtain a normalized down-sampling feature matrix;
S4, extracting the single-feature salient object from each normalized down-sampled feature matrix G_i to obtain a binary feature matrix Y_i;
S5: carrying out weight fusion on the single-feature significant target to obtain a feature matrixFeature matrixThe position of the original image is a first obvious target;
2. The feature-perception-based salient object extraction method according to claim 1, characterized in that the down-sampled feature matrix D_i in step S2 is obtained as follows:
D_i(x, y) = (1/16) · Σ_{p=1..4} Σ_{q=1..4} C_i(4(x−1)+p, 4(y−1)+q)
wherein: (x, y) are the coordinates of the down-sampled point corresponding to a 4×4 block in the feature layer, p is the row number within the 4×4 block, q is the column number within the 4×4 block, C_i(4(x−1)+p, 4(y−1)+q) is the feature value of the corresponding point, and D_i contains all points (x, y).
3. The feature-perception-based salient object extraction method according to claim 2, characterized in that the normalized down-sampled feature matrix G_i in step S3 is obtained as follows:
G_i(x, y) = (D_i(x, y) − min(D_i)) / (max(D_i) − min(D_i)).
4. The feature-perception-based salient object extraction method according to claim 3, characterized in that the feature matrix Y_i in step S4 is obtained as follows:
(1) extracting the single-feature salient object of the MSCN factor and the image entropy:
a threshold is set to extract the single-feature salient object: when G_i(x, y) > T1, the position corresponding to the point (x, y) in the feature layer belongs to the single-feature salient object;
(2) extracting the single-feature salient object of the dark channel and of the H and S channels of the HSV color space:
a statistical histogram is used to extract the salient object: overall statistics of the normalized down-sampled feature matrix G_i give the histogram H_all(k); histogram statistics of the peripheral edge of G_i give the histogram H_edge(k); the single-feature salient object is extracted according to the following formula:
R(k) = H_all(k) / H_edge(k),
a point whose value falls in a value segment k satisfying H_edge(k) = 0 or R(k) > T belonging to the single-feature salient object.
5. The feature-perception-based salient object extraction method according to claim 4, characterized in that: T1 = 0.4.
7. The feature-perception-based salient object extraction method according to claim 4, characterized in that the fused matrix Z in step S5 is obtained as follows:
Z(x, y) = Σ_{i=1..5} w_i · Y_i(x, y);
when Z(x, y) exceeds the set fusion threshold, the corresponding position on the original picture belongs to salient object one;
8. The feature-perception-based salient object extraction method according to any one of claims 1-7, characterized in that: salient object one is fused with the pixel blocks obtained by super-pixel segmentation to obtain salient object two.
9. The feature-perception-based salient object extraction method according to claim 8, characterized in that the fusion method comprises: when N_1 / N > 40%, wherein N is the total number of pixels in a single pixel block of the super-pixel segmentation and N_1 is the number of pixels of salient object one contained in that block, the single super-pixel block belongs to salient object two.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210015109.7A CN114022747B (en) | 2022-01-07 | 2022-01-07 | Salient object extraction method based on feature perception |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210015109.7A CN114022747B (en) | 2022-01-07 | 2022-01-07 | Salient object extraction method based on feature perception |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114022747A true CN114022747A (en) | 2022-02-08 |
CN114022747B CN114022747B (en) | 2022-03-15 |
Family
ID=80069837
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210015109.7A Active CN114022747B (en) | 2022-01-07 | 2022-01-07 | Salient object extraction method based on feature perception |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114022747B (en) |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101303733A (en) * | 2008-05-26 | 2008-11-12 | 东华大学 | Method for viewing natural color at night with sense of space adopting pattern database |
CN105678735A (en) * | 2015-10-13 | 2016-06-15 | 中国人民解放军陆军军官学院 | Target salience detection method for fog images |
CN105825238A (en) * | 2016-03-30 | 2016-08-03 | 江苏大学 | Visual saliency object detection method |
CN106127756A (en) * | 2016-06-21 | 2016-11-16 | 西安工程大学 | A kind of insulator recognition detection method based on multicharacteristic information integration technology |
CN107730472A (en) * | 2017-11-03 | 2018-02-23 | 昆明理工大学 | A kind of image defogging optimized algorithm based on dark primary priori |
CN109410171A (en) * | 2018-09-14 | 2019-03-01 | 安徽三联学院 | A kind of target conspicuousness detection method for rainy day image |
CN110008969A (en) * | 2019-04-15 | 2019-07-12 | 京东方科技集团股份有限公司 | The detection method and device in saliency region |
CN110675351A (en) * | 2019-09-30 | 2020-01-10 | 集美大学 | Marine image processing method based on global brightness adaptive equalization |
CN111091129A (en) * | 2019-12-24 | 2020-05-01 | 沈阳建筑大学 | Image salient region extraction method based on multi-color characteristic manifold sorting |
CN111310774A (en) * | 2020-04-01 | 2020-06-19 | 江苏商贸职业学院 | PM2.5 concentration measurement method based on image quality |
CN111915592A (en) * | 2020-08-04 | 2020-11-10 | 西安电子科技大学 | Remote sensing image cloud detection method based on deep learning |
Non-Patent Citations (2)
Title |
---|
LING Z et al.: "Optimal transmission estimation via fog density perception for efficient single image defogging", IEEE Transactions on Multimedia *
LIU Kun et al.: "Salient object extraction under haze conditions by fusing depth information", Journal of Hebei University of Technology *
Also Published As
Publication number | Publication date |
---|---|
CN114022747B (en) | 2022-03-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107622258B (en) | Rapid pedestrian detection method combining static underlying characteristics and motion information | |
US20200410212A1 (en) | Fast side-face interference resistant face detection method | |
WO2021254205A1 (en) | Target detection method and apparatus | |
CN107133943A (en) | A kind of visible detection method of stockbridge damper defects detection | |
CN108921820B (en) | Saliency target detection method based on color features and clustering algorithm | |
CN111062278B (en) | Abnormal behavior identification method based on improved residual error network | |
CN111524145A (en) | Intelligent picture clipping method and system, computer equipment and storage medium | |
CN109544564A (en) | A kind of medical image segmentation method | |
CN112529090B (en) | Small target detection method based on improved YOLOv3 | |
CN111626342B (en) | Image sample processing method, device and storage medium | |
CN110852327A (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN108776823A (en) | Cervical carcinoma lesion analysis method based on cell image recognition | |
CN113850136A (en) | Yolov5 and BCNN-based vehicle orientation identification method and system | |
CN111062854A (en) | Method, device, terminal and storage medium for detecting watermark | |
CN114220126A (en) | Target detection system and acquisition method | |
CN111553337A (en) | Hyperspectral multi-target detection method based on improved anchor frame | |
CN112926667B (en) | Method and device for detecting saliency target of depth fusion edge and high-level feature | |
Zeeshan et al. | A newly developed ground truth dataset for visual saliency in videos | |
CN113870196A (en) | Image processing method, device, equipment and medium based on anchor point cutting graph | |
CN111597845A (en) | Two-dimensional code detection method, device and equipment and readable storage medium | |
CN114022747B (en) | Salient object extraction method based on feature perception | |
JP4967045B2 (en) | Background discriminating apparatus, method and program | |
CN107368847A (en) | A kind of crop leaf diseases recognition methods and system | |
CN115775226B (en) | Medical image classification method based on transducer | |
CN113705640B (en) | Method for quickly constructing airplane detection data set based on remote sensing image |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||