CN109165658B - Strong negative sample underwater target detection method based on Faster-RCNN - Google Patents

Strong negative sample underwater target detection method based on Faster-RCNN

Info

Publication number
CN109165658B
CN109165658B CN201810986082.XA CN201810986082A CN109165658B CN 109165658 B CN109165658 B CN 109165658B CN 201810986082 A CN201810986082 A CN 201810986082A CN 109165658 B CN109165658 B CN 109165658B
Authority
CN
China
Prior art keywords
positive
negative
sample
samples
negative sample
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810986082.XA
Other languages
Chinese (zh)
Other versions
CN109165658A (en
Inventor
张盛平
吕晓倩
孙嘉敏
董开坤
朴学峰
孙鑫
张维刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology Weihai
Original Assignee
Harbin Institute of Technology Weihai
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology Weihai filed Critical Harbin Institute of Technology Weihai
Priority to CN201810986082.XA priority Critical patent/CN109165658B/en
Publication of CN109165658A publication Critical patent/CN109165658A/en
Application granted granted Critical
Publication of CN109165658B publication Critical patent/CN109165658B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

The invention discloses a strong-negative-sample underwater target detection method based on Faster-RCNN, which comprises the following steps: acquiring a target image data set, and inputting each target image into a convolutional neural network, where it is forward-propagated through the shared convolutional layers to obtain a low-dimensional feature map; feeding one path of the low-dimensional feature map into an RPN to obtain positive samples, negative samples and their coordinates, while the other path continues to propagate forward to obtain a high-dimensional feature map; performing image-mean processing on the negative samples that do not intersect the Ground Truth, so as to compare similarity based on image brightness and screen out false negative samples that resemble positive samples; inputting the positive samples, the screened negative samples and the high-dimensional feature map into the ROI Align layer together, and extracting the features of the positive- and negative-sample proposal regions; and passing the extracted proposal-region features into a fully connected layer, which outputs the classification score and the regressed coordinate values of each region.

Description

Strong negative sample underwater target detection method based on Faster-RCNN
Technical Field
The invention relates to the technical field of image processing and pattern recognition, and in particular to a strong-negative-sample underwater target detection method based on Faster-RCNN.
Background
Object detection is an important research direction in computer vision and is applied in many practical scenarios, such as intelligent video surveillance, content-based image retrieval, autonomous driving and pedestrian detection. At present, deep-learning-based object detection methods achieve the best results. Since 2014, when R-CNN first applied deep learning to object detection, the field has entered a stage of rapid development, and current deep-learning-based detectors fall into two mainstream categories: two-stage methods and single-stage methods. In general, two-stage methods are more accurate, while single-stage methods are faster.
Underwater biological target detection is of great significance in fields such as seafood fishing and marine resource protection. Deep learning, as a data-driven approach, can produce considerable detection results by learning from a large number of training samples. However, large numbers of underwater images are difficult to collect, and underwater images are typically blurred, with densely stacked targets and many small objects, which makes annotating the data set difficult. As a result, data sets for underwater target detection generally contain a large number of missing labels, so many negative samples may actually contain target regions, which degrades the final detection result.
Disclosure of Invention
The invention aims to provide a strong-negative-sample underwater target detection method based on Faster-RCNN, which improves the quality of the negative samples so that the detection model learns the data set better and the detection precision for underwater biological targets is improved.
In order to achieve the purpose, the invention adopts the following technical scheme:
a strong negative sample underwater target detection method based on Faster-RCNN comprises the following steps:
acquiring a target image data set, and inputting the target image into a convolutional neural network for forward propagation through the shared convolutional layers to obtain a low-dimensional feature map;
feeding one path of the obtained low-dimensional feature map into an RPN to obtain positive samples, negative samples and their coordinates, while the other path continues to propagate forward to obtain a high-dimensional feature map;
performing image-mean processing on the obtained negative samples that do not intersect the Ground Truth, so as to compare similarity based on image brightness and screen out false negative samples that resemble positive samples;
inputting the positive samples, the screened negative samples and the high-dimensional feature map into the ROI Align layer together, and extracting the features of the positive- and negative-sample proposal regions;
and passing the extracted proposal-region features into a fully connected layer, which outputs the classification score and the regressed coordinate values of each region.
Further, after the step of performing image-mean processing on the obtained negative samples that do not intersect the Ground Truth, comparing similarity based on image brightness and screening out false negative samples that resemble positive samples, the method further comprises:
computing an image LBP histogram for the remaining negative samples, comparing similarity based on image texture, and screening out false negative samples that resemble positive samples.
Further, the computing of an image LBP histogram for the remaining negative samples, comparing similarity based on image texture, and screening out false negative samples that resemble positive samples comprises:
computing the LBP histogram of the target image and concatenating it into a feature vector serving as the texture-similarity comparison template;
extracting LBP features from the remaining negative samples to obtain their LBP histograms, and concatenating each obtained histogram into a feature vector;
and computing the Euclidean distance between the feature vector of each negative sample and that of the template: if the distance is larger than a threshold, the sample is discarded; otherwise the negative sample is retained.
Further, the inputting of the target image into a convolutional neural network for forward propagation through the shared convolutional layers to obtain a low-dimensional feature map comprises: passing the target image through the shared convolutional layers of the deep convolutional neural network VGG16 or ResNet-50 to obtain the low-dimensional feature map.
Further, the feeding of one path of the obtained low-dimensional feature map into the RPN to obtain positive and negative samples and their coordinates comprises:
sliding a 3 × 3 window over the obtained low-dimensional feature map, mapping the center of the sliding window back to the original target image as an anchor center, and generating candidate regions at the scales 64², 128² and 256². If the IOU between the region mapped from the sliding window and the Ground Truth is greater than 0.7, the candidate region is considered a positive sample; if the IOU between the mapped region and the Ground Truth is less than 0.3, the candidate region is considered a negative sample. The positive and negative sample labels and the coordinates of all candidate regions are thus obtained.
Further, the performing of image-mean processing on the obtained negative samples that do not intersect the Ground Truth, comparing similarity based on image brightness and screening out false negative samples that resemble positive samples, comprises:
computing the mean value of the target image, computing the mean value of each negative sample whose IOU with the Ground Truth is 0, and comparing them: if the mean of the current negative sample is less than the target image mean minus ten, the negative sample is discarded; otherwise it is retained.
Further, the inputting of the positive samples, the screened negative samples and the high-dimensional feature map into the ROI Align layer to extract the features of the positive- and negative-sample proposal regions comprises:
mapping the screened positive and negative samples into the high-dimensional feature map using their coordinate information;
and dividing each positive and negative sample into a fixed number of blocks, namely 7 × 7, performing a max-pooling operation on each block, and extracting fixed-length feature vectors corresponding to the positive- and negative-sample proposal regions.
The effects stated in this summary are only those of the embodiments, not all effects of the invention. The above technical solution has the following advantages or beneficial effects:
The invention provides a strong-negative-sample underwater biological target detection method based on Faster-RCNN, which addresses the missing-label problem caused by blurred underwater image data, stacked targets and undersized objects. Through secondary processing of the negative samples, false negative samples caused by missing labels can be effectively removed, providing higher-quality negative samples for model training, so that the model converges better and the detection result is further improved.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a flowchart of a second method of an embodiment of the present invention.
Detailed Description
In order to clearly explain the technical features of the present invention, the following detailed description of the present invention is provided with reference to the accompanying drawings. The following disclosure provides many different embodiments, or examples, for implementing different features of the invention. To simplify the disclosure of the present invention, the components and arrangements of specific examples are described below. Furthermore, the present invention may repeat reference numerals and/or letters in the various examples. This repetition is for the purpose of simplicity and clarity and does not in itself dictate a relationship between the various embodiments and/or configurations discussed. It should be noted that the components illustrated in the figures are not necessarily drawn to scale. Descriptions of well-known components and processing techniques and procedures are omitted so as to not unnecessarily limit the invention.
Example one
As shown in FIG. 1, a strong negative sample underwater target detection method based on Faster-RCNN comprises the following steps:
s1, acquiring a target image data set, inputting the target image into a convolutional neural network for forward propagation to a shared convolutional layer to obtain a low-dimensional feature map;
s2, inputting one path of the obtained low-dimensional feature map into an RPN network to obtain positive and negative samples and coordinates, and continuously transmitting the other path of the low-dimensional feature map forward to obtain a high-dimensional feature map;
s3, carrying out image averaging processing on the obtained negative samples which are not intersected with the Ground Truth, realizing similarity comparison based on image brightness characteristics, and finishing screening out false negative samples which are similar to the positive samples;
s4, inputting the positive sample, the obtained screened negative sample and the high-dimensional feature map into the ROI Align layer together, and extracting the features of the positive and negative sample suggested regions;
and S5, transmitting the acquired characteristics of the suggested region into a full connection layer, and outputting the classification score and the regression coordinate value of the region.
Example two
As shown in FIG. 2, a strong negative sample underwater target detection method based on Faster-RCNN comprises the following steps:
s1, acquiring a target image data set, inputting the target image into a convolutional neural network for forward propagation to a shared convolutional layer to obtain a low-dimensional feature map;
s2, inputting one path of the obtained low-dimensional feature map into an RPN network to obtain positive and negative samples and coordinates, and continuously transmitting the other path of the low-dimensional feature map forward to obtain a high-dimensional feature map;
s3, carrying out image averaging processing on the obtained negative samples which are not intersected with the Ground Truth, realizing similarity comparison based on image brightness characteristics, and finishing screening out false negative samples which are similar to the positive samples;
s4, performing image LBP histogram operation on the residual negative samples, realizing similarity comparison based on image texture characteristics, and finishing screening of false negative samples similar to the positive samples;
s5, inputting the positive sample, the obtained screened negative sample and the high-dimensional feature map into the ROI Align layer together, and extracting the features of the positive and negative sample suggested regions;
and S6, transmitting the acquired characteristics of the suggested region into a full connection layer, and outputting the classification score and the regression coordinate value of the region.
In step S1, acquiring the target image data set specifically comprises collecting a large number of underwater images, dividing all images into a training-validation set and a test set at a ratio of 7:3, and further dividing the training-validation set into a training set and a validation set at a ratio of 8:2. The training set is used to train the model parameters; the validation set is used to check the state and convergence of the model during training and to tune the hyper-parameters; the test set is used to finally evaluate the generalization ability of the model.
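The 7:3 and 8:2 split described above can be sketched as follows. The helper name `split_dataset` and the `seed` parameter are illustrative assumptions; the patent specifies only the ratios, not the implementation or any stratification:

```python
import random

def split_dataset(image_paths, seed=0):
    """Split images into train/val/test following the 7:3 then 8:2 scheme."""
    rng = random.Random(seed)
    paths = list(image_paths)
    rng.shuffle(paths)

    n_trainval = int(len(paths) * 0.7)        # 7:3 trainval/test split
    trainval, test = paths[:n_trainval], paths[n_trainval:]

    n_train = int(len(trainval) * 0.8)        # 8:2 train/val split
    train, val = trainval[:n_train], trainval[n_train:]
    return train, val, test
```

With 100 images this yields 56 training, 14 validation and 30 test images.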
Inputting the target image into a convolutional neural network for forward propagation through the shared convolutional layers to obtain a low-dimensional feature map specifically comprises passing the target image through the shared convolutional layers of the deep convolutional neural network VGG16 or ResNet-50. This part comprises several convolutional layers and pooling layers, with the rectified linear unit (ReLU) as the activation function, and completes the extraction of the low-dimensional feature map of the image.
In step S2, feeding one path of the obtained low-dimensional feature map into the RPN to obtain positive and negative samples and their coordinates specifically comprises: sliding a 3 × 3 window over the obtained low-dimensional feature map, mapping the center of the sliding window back to the original target image as an anchor center, and generating candidate regions at the scales 64², 128² and 256². If the IOU (intersection over union, i.e., the number of pixels in the intersection of two image regions divided by the number of pixels in their union) between the region mapped from the sliding window and the Ground Truth is greater than 0.7, the candidate region is considered a positive sample; if the IOU between the mapped region and the Ground Truth is less than 0.3, the candidate region is considered a negative sample. The positive and negative sample labels and the coordinates of all candidate regions are thus obtained.
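The IOU computation and the 0.7/0.3 labeling rule can be sketched as below. The helper names are illustrative; anchors falling between the two thresholds are marked as ignored here, a convention the text itself does not spell out:

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def label_anchor(anchor, gt_boxes):
    """Label an anchor positive (1), negative (0) or ignored (-1)
    using the 0.7 / 0.3 IOU thresholds from the text."""
    best = max((iou(anchor, gt) for gt in gt_boxes), default=0.0)
    if best > 0.7:
        return 1
    if best < 0.3:
        return 0
    return -1
```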
In step S3, performing image-mean processing on the obtained negative samples that do not intersect the Ground Truth, comparing similarity based on image brightness and screening out false negative samples that resemble positive samples comprises: computing the mean value of the target image, computing the mean of each negative sample whose IOU with the Ground Truth is 0, and comparing them: if the mean of the current negative sample is less than the target image mean minus ten, the negative sample is discarded; otherwise it is retained. Since the brightness of target regions in underwater images is far lower than that of background regions, most false negative samples can be screened out preliminarily by this image-mean method.
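The brightness rule in step S3 can be sketched as follows. The function name and the grayscale-array input are assumptions; only the "image mean minus ten" rule comes from the text:

```python
import numpy as np

def screen_by_brightness(image_gray, negative_boxes, margin=10):
    """Keep only negatives whose mean intensity is not far below the
    whole-image mean; darker regions likely contain unlabeled targets.

    Per the text: discard a negative when its mean is less than
    (image mean - 10); `margin` is that fixed offset of ten.
    """
    img_mean = float(image_gray.mean())
    kept = []
    for (x1, y1, x2, y2) in negative_boxes:
        patch = image_gray[y1:y2, x1:x2]
        if patch.size and float(patch.mean()) >= img_mean - margin:
            kept.append((x1, y1, x2, y2))
    return kept
```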
In step S4, performing an image LBP histogram operation on the remaining negative samples, comparing similarity based on image texture and screening out false negative samples that resemble positive samples comprises:
computing the LBP histogram (LBP is an operator describing local texture features) of the target image and concatenating it into a feature vector serving as the texture-similarity comparison template;
extracting LBP features from the remaining negative samples to obtain their LBP histograms, and concatenating each obtained histogram into a feature vector;
and computing the Euclidean distance between the feature vector of each negative sample and that of the template: if the distance is larger than a threshold, the sample is discarded; otherwise the negative sample is retained. Through these brightness and texture similarity comparisons, false negative samples containing targets are essentially filtered out, generating strong negative samples for model training. False negative samples caused by missing labels are thus effectively removed, providing higher-quality negative samples for model training.
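A minimal sketch of the texture screening, assuming the basic 3 × 3, 256-bin LBP variant with a single global histogram per region; the patent does not fix the LBP radius, neighborhood, block layout, or threshold value:

```python
import numpy as np

def lbp_histogram(gray):
    """Normalized 256-bin histogram of basic 3x3 LBP codes."""
    g = np.asarray(gray, dtype=np.int32)
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    # Eight fixed neighbors, one bit each, clockwise from top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neigh = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
        code |= (neigh >= center).astype(np.int32) << bit
    hist = np.bincount(code.ravel(), minlength=256).astype(np.float64)
    return hist / hist.sum()

def keep_negative(neg_hist, template_hist, threshold):
    """Per the text: discard the negative if the Euclidean distance to
    the template exceeds the threshold, otherwise keep it."""
    return float(np.linalg.norm(neg_hist - template_hist)) <= threshold
```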
In the training process, the loss function for one image is divided into a classification loss and a regression loss, and is defined as

L(\{p_i\},\{t_i\}) = \frac{1}{N_{cls}} \sum_i L_{cls}(p_i, p_i^*) + \lambda \frac{1}{N_{reg}} \sum_i p_i^* L_{reg}(t_i, t_i^*)

where i denotes the index of each sample and p_i denotes the predicted probability of belonging to a given class; the ground-truth label p_i^* = 1 if the current sample is a positive sample and p_i^* = 0 if it is a negative sample, so the factor p_i^* indicates that the regression operation is performed only for positive samples; t_i denotes the translation and scaling parameters from the positive sample to the proposal region, and t_i^* denotes the translation and scaling parameters from the positive sample to the Ground Truth. The classification loss L_{cls} is the cross-entropy loss:

L_{cls}(p_i, p_i^*) = -[\, p_i^* \log p_i + (1 - p_i^*) \log(1 - p_i) \,]

The regression loss L_{reg} is the SmoothL1 loss, applied elementwise to x = t_i - t_i^*:

smooth_{L1}(x) = 0.5\,x^2 \text{ if } |x| < 1, \quad |x| - 0.5 \text{ otherwise}

The parameter λ balances the relative weight of the classification loss and the regression loss.
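The SmoothL1 and cross-entropy terms can be sketched numerically as follows, as scalar versions for a single sample; the per-image sums, the 1/N normalizers and the λ weighting are omitted, and the epsilon guard is an implementation assumption:

```python
import math

def smooth_l1(x):
    """SmoothL1 regression loss, applied elementwise to t_i - t_i*."""
    ax = abs(x)
    return 0.5 * x * x if ax < 1 else ax - 0.5

def binary_cross_entropy(p, p_star):
    """-log(p) for a positive sample (p* = 1), -log(1 - p) for a
    negative sample (p* = 0); eps guards against log(0)."""
    eps = 1e-12
    return -(p_star * math.log(p + eps)
             + (1 - p_star) * math.log(1 - p + eps))
```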
In step S5, inputting the positive samples, the screened negative samples and the high-dimensional feature map into the ROI Align layer and extracting the features of the positive- and negative-sample proposal regions comprises:
mapping the screened positive and negative samples into the high-dimensional feature map using their coordinate information;
and dividing each positive and negative sample into a fixed number of blocks, namely 7 × 7, performing a max-pooling operation on each block, and extracting fixed-length feature vectors corresponding to the positive- and negative-sample proposal regions.
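A quantized sketch of the 7 × 7 block pooling over one ROI of a single-channel feature map. Note that true ROI Align samples at fractional positions with bilinear interpolation; the integer block boundaries used here are a simplification:

```python
import numpy as np

def roi_max_pool(feature_map, box, out_size=7):
    """Divide the ROI into out_size x out_size blocks and max-pool each
    block, yielding a fixed-size output regardless of ROI size."""
    x1, y1, x2, y2 = box
    roi = feature_map[y1:y2, x1:x2]
    h, w = roi.shape
    out = np.empty((out_size, out_size), dtype=roi.dtype)
    ys = np.linspace(0, h, out_size + 1).astype(int)
    xs = np.linspace(0, w, out_size + 1).astype(int)
    for i in range(out_size):
        for j in range(out_size):
            # Guarantee at least a 1-pixel block for small ROIs.
            block = roi[ys[i]:max(ys[i + 1], ys[i] + 1),
                        xs[j]:max(xs[j + 1], xs[j] + 1)]
            out[i, j] = block.max()
    return out
```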
Although the embodiments of the present invention have been described with reference to the accompanying drawings, this is not intended to limit the scope of the present invention; those skilled in the art will understand that various modifications and variations can be made, without inventive effort, on the basis of the technical solution of the present invention.

Claims (7)

1. A strong negative sample underwater target detection method based on Faster-RCNN, characterized by comprising the following steps:
acquiring a target image data set, inputting the target image into a convolutional neural network for forward propagation to a shared convolutional layer to obtain a low-dimensional feature map;
feeding one path of the obtained low-dimensional feature map into an RPN to obtain positive samples, negative samples and their coordinates, while the other path continues to propagate forward to obtain a high-dimensional feature map;
performing image-mean processing on the obtained negative samples that do not intersect the Ground Truth, comparing similarity based on image brightness and screening out false negative samples that resemble positive samples, wherein the negative samples that do not intersect the Ground Truth are the negative samples whose IOU value is 0;
inputting the positive samples, the screened negative samples and the high-dimensional feature map into the ROI Align layer together, and extracting the features of the positive- and negative-sample proposal regions;
and transmitting the acquired characteristics of the suggested region into a full-connection layer, and outputting the classification score and the regressed coordinate value of the region.
2. The method as claimed in claim 1, wherein after the step of performing image-mean processing on the obtained negative samples that do not intersect the Ground Truth, comparing similarity based on image brightness and screening out false negative samples that resemble positive samples, the method further comprises:
computing an image LBP histogram for the remaining negative samples, comparing similarity based on image texture, and screening out false negative samples that resemble positive samples.
3. The method as claimed in claim 2, wherein performing an image LBP histogram operation on the remaining negative samples, comparing similarity based on image texture, and screening out false negative samples that resemble positive samples comprises:
computing the LBP histogram of the target image and concatenating it into a feature vector serving as the texture-similarity comparison template;
extracting LBP features from the remaining negative samples to obtain their LBP histograms, and concatenating each obtained histogram into a feature vector;
and computing the Euclidean distance between the feature vector of each negative sample and that of the template: if the distance is larger than a threshold, the sample is discarded; otherwise the negative sample is retained.
4. The method as claimed in claim 1 or 2, wherein inputting the target image into the convolutional neural network for forward propagation through the shared convolutional layers to obtain the low-dimensional feature map comprises: passing the target image through the shared convolutional layers of the deep convolutional neural network VGG16 or ResNet-50 to obtain the low-dimensional feature map.
5. The method as claimed in claim 1 or 2, wherein feeding one path of the obtained low-dimensional feature map into the RPN to obtain positive and negative samples and their coordinates comprises:
sliding a 3 × 3 window over the obtained low-dimensional feature map, mapping the center of the sliding window back to the original target image as an anchor center, and generating candidate regions at the scales 64², 128² and 256². If the IOU between the region mapped from the sliding window and the Ground Truth is greater than 0.7, the candidate region is considered a positive sample; if the IOU between the mapped region and the Ground Truth is less than 0.3, the candidate region is considered a negative sample; the positive and negative sample labels and the coordinates of all candidate regions are thus obtained.
6. The method as claimed in claim 1 or 2, wherein performing image-mean processing on the obtained negative samples that do not intersect the Ground Truth, comparing similarity based on image brightness and screening out false negative samples that resemble positive samples comprises:
computing the mean value of the target image, computing the mean of each negative sample whose IOU with the Ground Truth is 0, and comparing them: if the mean of the current negative sample is less than the target image mean minus ten, the negative sample is discarded; otherwise it is retained.
7. The method as claimed in claim 1 or 2, wherein inputting the positive samples, the screened negative samples and the high-dimensional feature map into the ROI Align layer together to extract the features of the positive- and negative-sample proposal regions comprises:
mapping the screened positive and negative samples into the high-dimensional feature map using their coordinate information;
and dividing each positive and negative sample into a fixed number of blocks, namely 7 × 7, performing a max-pooling operation on each block, and extracting fixed-length feature vectors corresponding to the positive- and negative-sample proposal regions.
CN201810986082.XA 2018-08-28 2018-08-28 Strong negative sample underwater target detection method based on Faster-RCNN Active CN109165658B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810986082.XA CN109165658B (en) 2018-08-28 2018-08-28 Strong negative sample underwater target detection method based on Faster-RCNN

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810986082.XA CN109165658B (en) 2018-08-28 2018-08-28 Strong negative sample underwater target detection method based on Faster-RCNN

Publications (2)

Publication Number Publication Date
CN109165658A CN109165658A (en) 2019-01-08
CN109165658B true CN109165658B (en) 2021-08-13

Family

ID=64896962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810986082.XA Active CN109165658B (en) 2018-08-28 2018-08-28 Strong negative sample underwater target detection method based on Faster-RCNN

Country Status (1)

Country Link
CN (1) CN109165658B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046254B (en) * 2019-04-18 2022-03-08 阿波罗智联(北京)科技有限公司 Method and apparatus for generating a model
CN110826634B (en) * 2019-11-11 2022-12-30 北京百度网讯科技有限公司 Training method and device of image target detection model, electronic equipment and storage medium
CN113743439A (en) * 2020-11-13 2021-12-03 北京沃东天骏信息技术有限公司 Target detection method, device and storage medium
CN112613425B (en) * 2020-12-24 2022-03-22 山东船舶技术研究院 Target identification system for small sample underwater image
CN116776887B (en) * 2023-08-18 2023-10-31 昆明理工大学 Negative sampling remote supervision entity identification method based on sample similarity calculation

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909572A (en) * 2017-11-17 2018-04-13 合肥工业大学 Pulmonary nodule detection method and system based on image enhancement
CN108230323A (en) * 2018-01-30 2018-06-29 浙江大学 Pulmonary nodule false-positive screening method based on convolutional neural networks
CN108416287A (en) * 2018-03-04 2018-08-17 南京理工大学 Pedestrian detection method based on missed negative sample mining
CN108416307A (en) * 2018-03-13 2018-08-17 北京理工大学 Aerial-image road surface crack detection method, apparatus and device

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN106803247B (en) * 2016-12-13 2021-01-22 上海交通大学 Microangioma image identification method based on multistage screening convolutional neural network

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN107909572A (en) * 2017-11-17 2018-04-13 合肥工业大学 Pulmonary nodule detection method and system based on image enhancement
CN108230323A (en) * 2018-01-30 2018-06-29 浙江大学 Pulmonary nodule false-positive screening method based on convolutional neural networks
CN108416287A (en) * 2018-03-04 2018-08-17 南京理工大学 Pedestrian detection method based on missed negative sample mining
CN108416307A (en) * 2018-03-13 2018-08-17 北京理工大学 Aerial-image road surface crack detection method, apparatus and device

Non-Patent Citations (1)

Title
Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks; Xue Yang et al.; Remote Sensing; 2018-01-18; full text *

Also Published As

Publication number Publication date
CN109165658A (en) 2019-01-08

Similar Documents

Publication Publication Date Title
CN109165658B (en) Strong negative sample underwater target detection method based on fast-RCNN
CN108830285B (en) Target detection method for reinforcement learning based on fast-RCNN
CN112418117B (en) Small target detection method based on unmanned aerial vehicle image
CN111027493B (en) Pedestrian detection method based on deep learning multi-network soft fusion
CN112966691B (en) Multi-scale text detection method and device based on semantic segmentation and electronic equipment
CN108305260B (en) Method, device and equipment for detecting angular points in image
CN111898432B (en) Pedestrian detection system and method based on improved YOLOv3 algorithm
CN110569782A (en) Target detection method based on deep learning
CN109255386B (en) Road pedestrian rapid detection method based on millimeter wave radar and vision fusion
CN108734200B (en) Human target visual detection method and device based on BING (binarized normed gradients) features
CN109801305B (en) SAR image change detection method based on deep capsule network
CN113799124B (en) Robot flexible grabbing detection method in unstructured environment
CN110008899B (en) Method for extracting and classifying candidate targets of visible light remote sensing image
CN111968124B (en) Shoulder musculoskeletal ultrasonic structure segmentation method based on semi-supervised semantic segmentation
CN111126401A (en) License plate character recognition method based on context information
Barodi et al. An enhanced artificial intelligence-based approach applied to vehicular traffic signs detection and road safety enhancement
Khellal et al. Pedestrian classification and detection in far infrared images
Li et al. Printed/handwritten texts and graphics separation in complex documents using conditional random fields
CN115019133A (en) Method and system for detecting weak target in image based on self-training and label anti-noise
Chen et al. Object detection in 20 questions
CN112991281B (en) Visual detection method, system, electronic equipment and medium
CN114037886A (en) Image recognition method and device, electronic equipment and readable storage medium
CN113808123A (en) Machine vision-based dynamic detection method for liquid medicine bag
CN111582057B (en) Face verification method based on local receptive field
Zhu et al. Scene text relocation with guidance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant