CN105022990A - Water surface target rapid-detection method based on unmanned vessel application - Google Patents

Water surface target rapid-detection method based on unmanned vessel application

Info

Publication number
CN105022990A
CN105022990A (application CN201510368994.7A)
Authority
CN
China
Prior art keywords
target
region
feature
super
pixel block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510368994.7A
Other languages
Chinese (zh)
Other versions
CN105022990B (en)
Inventor
肖阳
曹治国
李畅
方智文
朱磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huazhong University of Science and Technology
Original Assignee
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huazhong University of Science and Technology filed Critical Huazhong University of Science and Technology
Priority to CN201510368994.7A priority Critical patent/CN105022990B/en
Publication of CN105022990A publication Critical patent/CN105022990A/en
Application granted granted Critical
Publication of CN105022990B publication Critical patent/CN105022990B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Astronomy & Astrophysics (AREA)
  • Remote Sensing (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a water-surface target rapid-detection method for unmanned vessel applications, belonging to the intersecting technical fields of digital image processing and control systems. Target candidate regions are first obtained through objectness analysis; because these candidate regions contain some false alarms, a saliency map is then obtained through saliency analysis; finally, objectness and saliency are combined to reject the false alarms and obtain the accurate position of the target. The method uses no type-specific target information and therefore generalizes well; compared with other existing target detection algorithms, it is greatly improved both in detection performance and in speed, and it provides important guidance for the automatic obstacle avoidance of unmanned vessels.

Description

Water-surface target rapid-detection method for unmanned vessel applications
Technical field
The invention belongs to the intersecting technical fields of digital image processing and control systems, and more specifically relates to a water-surface target rapid-detection method for unmanned vessel applications.
Background technology
In recent years, research and development of unmanned surface vessels has attracted strong interest from many naval powers; representative examples are the U.S. "Spartan" unmanned vessel and the Israeli "Protector" unmanned vessel. At present, from both civilian and military perspectives, China's demand for unmanned vessels is growing rapidly, particularly in fields such as territorial-water patrol and the fight against piracy and smuggling. In the autonomous navigation of an unmanned vessel, fast detection of water-surface targets is the basis of automatic obstacle avoidance. Several commonly used target detection methods are introduced below:
(1) Target detection methods based on local feature matching
Target detection based on local feature matching usually describes the target and the image to be detected through key points and the information in their neighborhoods, or through feature information within local regions.
In 2004, David Lowe published the well-known SIFT (Scale-Invariant Feature Transform) local feature descriptor in IJCV, which effectively adapts to changes in scale, rotation, affine transformation and viewpoint. By taking differences of Gaussian-filtered images in an image pyramid, the algorithm detects extrema in the Laplacian scale space as feature points and describes each with a local 128-dimensional feature vector, giving it good adaptability and robustness in applications.
(2) Structure-based target detection methods
The structure of an object reflects target information well. Objects are usually composed of structured parts: a person typically consists of a head, a torso and four limbs, a face consists of the facial features, and a car consists of a body and wheels. With this structural information, targets can be accurately detected in complex scenes.
In 2010, Pedro Felzenszwalb published the DPM (Deformable Part Model) in PAMI. The DPM divides a target into several parts and, at detection time, decides whether an object is the target to be detected according to the matching scores of the parts and the spatial relationships between them. DPM is one of the best current target detection algorithms and won the VOC detection challenge in several consecutive years.
(3) Target detection algorithms based on deep learning
The concept of deep learning originates from research on artificial neural networks. Deep learning combines low-level features to form more abstract high-level representations of attribute categories or features, in order to discover distributed feature representations of the data. The CNN (convolutional neural network) is currently the most widely used deep learning model.
In 2014, Ross Girshick proposed the R-CNN method at CVPR, combining object candidate regions with a CNN for target detection. R-CNN divides target detection into two parts: finding object candidate regions and recognizing targets. It replaces the fully connected layers of the CNN with an SVM classifier and uses the earlier layers of the CNN for feature extraction. R-CNN achieves very good results in the target detection field and has become an important branch of it.
Although many target detection algorithms exist today, whether based on feature matching, DPM or R-CNN, they all suffer from poor generality. They are relatively effective for simple targets, for example detecting only ships of a particular type. In the autonomous navigation of an unmanned vessel, however, many target types are encountered (e.g. pleasure boats, sailing boats, warships, buoys, floating objects, reefs), and targets vary greatly in pose and viewpoint, so current target detection algorithms cannot adapt well to real natural scenes. In addition, because unmanned vessels are oriented toward practical application, the real-time requirements on the algorithm are high, while the complexity of current DPM and R-CNN algorithms is too great for real-time operation to be easily satisfied.
In summary, although many related target detection algorithms exist, for reasons such as generality and complexity they are difficult to apply to the automatic obstacle avoidance of unmanned vessels.
Summary of the invention
In view of the above defects of, or improvement needs in, the prior art, the invention provides a water-surface target rapid-detection method for unmanned vessel applications, to realize automatic obstacle avoidance and autonomous navigation of an unmanned vessel. The invention uses no type-specific target information and therefore generalizes well. At the same time, its algorithmic complexity is low, so the various obstacles encountered during autonomous navigation can be detected in real time.
The invention provides a water-surface target rapid-detection method for unmanned vessel applications, comprising the following steps:
Step 1: train an intra-layer classifier and an inter-layer classifier, wherein the intra-layer classifier is used, at every layer of the constructed scale space, to judge whether the current candidate region is a target region, and the inter-layer classifier is used for weighted scoring across the different layers;
Step 2: use the intra-layer classifier and the inter-layer classifier to perform objectness analysis on the original image and obtain the final target candidate regions, comprising the following sub-steps:
(2-1) apply scale transformations to the original image to build a pyramid model and obtain images of different sizes, denoted L_1, L_2, ..., L_M, where M is the number of layers of the constructed scale space;
(2-2) in each layer image L_i, use a sliding window to extract a fixed-size region at each position, compute the NG feature of the region, and compute the region's score with the intra-layer classifier, thereby obtaining target candidate regions at the different layers;
(2-3) use the inter-layer classifier to assign weighted scores to the target candidate regions obtained at the different layers, and sort them according to the weighted scores;
(2-4) apply non-maximum suppression to the target candidate regions to obtain the final target candidate regions;
Step 3: train a random forest regressor and multi-scale fusion weights, wherein the random forest regressor is used to compute the saliency value of each super-pixel block after segmentation, and the multi-scale fusion weights are used to fuse the saliency maps obtained at different scales;
Step 4: use the random forest regressor and the multi-scale fusion weights to perform saliency analysis on the original image and obtain the final saliency map;
Step 5: according to the final target candidate regions and the final saliency map, reject the false-alarm candidate regions and finally obtain the accurate position of the target.
In general, compared with the prior art, the above technical scheme conceived by the invention has the following beneficial effects:
The invention can rapidly detect the various obstacles encountered while an unmanned vessel navigates autonomously. By processing the images captured by the camera on board the unmanned vessel, the surrounding environment is perceived in real time and autonomous navigation is realized. For each captured image, target candidate regions are obtained through objectness analysis, and target false alarms are rejected with the help of saliency knowledge. Compared with other existing target detection algorithms, the invention is greatly improved both in detection performance and in speed, and provides important guidance for the automatic obstacle avoidance of unmanned vessels.
Brief description of the drawings
Fig. 1 is the flow chart of the water-surface target rapid-detection method of the invention for unmanned vessel applications;
Fig. 2 is the flow chart of the objectness analysis in the detection stage of the invention;
Fig. 3 shows results obtained after processing by the objectness algorithm of the invention;
Fig. 4 is the flow chart of the saliency analysis in the detection stage of the invention;
Fig. 5 shows results of the saliency analysis in the detection stage of the invention.
Detailed description of the embodiments
To make the objectives, technical scheme and advantages of the invention clearer, the invention is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the invention and are not intended to limit it. In addition, the technical features involved in the embodiments described below can be combined with one another as long as they do not conflict.
The invention is divided into three parts: first, an objectness model is trained and used to perform objectness analysis on the image to be detected, yielding target candidate regions that may still contain some false alarms; then, a saliency model is trained and used to perform saliency analysis on the image to be detected, yielding a saliency map; finally, objectness is combined with saliency to reject the target false alarms.
Fig. 1 is the flow chart of the water-surface target rapid-detection method of the invention for unmanned vessel applications, which specifically comprises the following steps:
Step 1: train the objectness model. This training stage has two goals: training the intra-layer classifier and training the inter-layer classifier. The intra-layer classifier judges, at every layer, whether the current candidate region is a target region; the inter-layer classifier is used for weighted scoring across the different layers. In an embodiment of the invention, PASCAL VOC 2007, which contains 10000 images, is used as the training set, with 5000 images for training and 5000 for testing. Step 1 specifically comprises the following sub-steps:
(1-1) Train the intra-layer classifier: extract the target regions directly from the training samples and compress them into blocks of fixed size to serve as positive samples; in an embodiment of the invention, blocks of size 8 x 8 are uniformly used. Candidate blocks are selected at random from the training samples, and any candidate block whose overlap with a target region is below a fixed threshold serves as a negative sample; in an embodiment of the invention, the fixed threshold is 50%;
(1-2) Train the inter-layer classifier: rescale the training samples to obtain images of different layers, take 8 x 8 blocks at random, map each block back to the original image according to the compression ratio, and use it as a positive sample if its overlap with a target region exceeds 50%, otherwise as a negative sample. A minimal sketch of this sample-generation procedure is given below.
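The following Python sketch illustrates how positive and negative blocks might be generated for the intra-layer classifier; the 8 x 8 block size and the 50% overlap threshold follow the embodiment, while the helper names, the OpenCV resizing, and the intersection-over-candidate-area overlap measure are assumptions made only for illustration.

```python
import numpy as np
import cv2  # assumed available for resizing

def overlap_ratio(box, gt):
    """Fraction of the candidate box covered by the ground-truth target box.
    Boxes are (x1, y1, x2, y2). The exact overlap measure of the patent is
    not specified; intersection over candidate area is assumed here."""
    ix1, iy1 = max(box[0], gt[0]), max(box[1], gt[1])
    ix2, iy2 = min(box[2], gt[2]), min(box[3], gt[3])
    iw, ih = max(0, ix2 - ix1), max(0, iy2 - iy1)
    area = (box[2] - box[0]) * (box[3] - box[1])
    return (iw * ih) / float(area) if area > 0 else 0.0

def make_intra_layer_samples(image, gt_boxes, n_neg=100, thr=0.5):
    """Positive samples: ground-truth regions compressed to 8x8 blocks.
    Negative samples: random windows overlapping every target by < thr."""
    pos, neg = [], []
    for (x1, y1, x2, y2) in gt_boxes:
        pos.append(cv2.resize(image[y1:y2, x1:x2], (8, 8)))
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)
    while len(neg) < n_neg:
        bw, bh = rng.integers(8, w), rng.integers(8, h)
        x1, y1 = rng.integers(0, w - bw), rng.integers(0, h - bh)
        box = (x1, y1, x1 + bw, y1 + bh)
        if all(overlap_ratio(box, gt) < thr for gt in gt_boxes):
            neg.append(cv2.resize(image[y1:y1 + bh, x1:x1 + bw], (8, 8)))
    return pos, neg
```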
Step 2: use the objectness model trained in step 1 to perform objectness analysis on the original image; objectness analysis is a method of quickly obtaining target candidate regions. Fig. 2 shows the flow chart of the objectness analysis in the detection stage of the invention, which specifically comprises the following sub-steps:
(2-1) Apply scale transformations to the original image to build a pyramid model and obtain images of different sizes, denoted L_1, L_2, ..., L_M, where M is the number of layers of the constructed scale space; in an embodiment of the invention, M is 33;
(2-2) In each layer image L_i (i = 1, 2, ..., M), use a sliding window to extract an 8 x 8 region at each position, compute the normed gradients (hereinafter NG) feature of the region, and compute the region's score with the intra-layer classifier; this score measures how likely the position is to be a target candidate region, so target candidate regions are obtained at the different layers. In an embodiment of the invention, when computing the NG feature, the maximum horizontal gradient magnitude over all channels is taken as g_x and the maximum vertical gradient magnitude as g_y, and the feature value of each point is computed as min(|g_x| + |g_y|, 255);
(2-3) After step (2-2), every candidate position in each layer image L_i has a score measuring how likely it is to be a target candidate region. Because many layers are constructed in the invention, the inter-layer classifier assigns weighted scores to the target candidate regions obtained at the different layers, and the regions are sorted by their weighted scores; the higher the weighted score, the more likely the region is to contain a target;
(2-4) After step (2-3), to reduce the heavy overlap between candidate regions, non-maximum suppression is applied to the target candidate regions to obtain the final target candidate regions. Fig. 3 shows results obtained after processing by the objectness algorithm of the invention, with the original image on the left and the result on the right. Comparing the two shows that the objectness algorithm of the invention obtains target candidate regions well, but some false alarms remain, so the candidate regions need further processing. A sketch of the NG feature computation and the non-maximum suppression is given below.
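As an illustration of sub-steps (2-2) and (2-4), the sketch below computes an NG map with the formula min(|g_x| + |g_y|, 255) from the embodiment and applies greedy non-maximum suppression to scored candidate boxes; the intra-layer classifier itself is omitted, and the Sobel gradients and the IoU-based suppression rule are assumptions for illustration.

```python
import numpy as np
import cv2

def ng_feature_map(layer_bgr):
    """Normed-gradients map: per pixel, min(|gx| + |gy|, 255), where gx and gy
    are the maximum horizontal/vertical gradient magnitudes over all channels."""
    grads_x, grads_y = [], []
    for c in cv2.split(layer_bgr):
        grads_x.append(np.abs(cv2.Sobel(c, cv2.CV_32F, 1, 0, ksize=1)))
        grads_y.append(np.abs(cv2.Sobel(c, cv2.CV_32F, 0, 1, ksize=1)))
    gx = np.max(np.stack(grads_x), axis=0)
    gy = np.max(np.stack(grads_y), axis=0)
    return np.minimum(gx + gy, 255).astype(np.uint8)

def nms(boxes, scores, iou_thr=0.5):
    """Greedy non-maximum suppression on an (K, 4) array of (x1, y1, x2, y2) boxes."""
    order = np.argsort(scores)[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        rest = order[1:]
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thr]
    return keep
```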
Step 3: train the saliency model. This training stage has two goals: training the random forest regressor and training the multi-scale fusion weights. First, graph-based segmentation is used to segment the original image at multiple scales. At each scale, segmentation yields many independent regions, uniformly referred to here as super-pixel blocks. The random forest regressor computes the saliency value of each super-pixel block after segmentation; the multi-scale fusion weights fuse the saliency maps obtained at different scales into the final saliency map. In an embodiment of the invention, MSRA-B, which contains 5000 images each with a corresponding manually annotated ground-truth map, is used as the training set. Step 3 specifically comprises the following sub-steps:
(3-1) Train the random forest regressor: use the classical graph-based segmentation method to segment the original image into N scale layers; in an embodiment of the invention, N is 15. At each scale, for each super-pixel block R obtained after segmentation, find the corresponding region H in the manually annotated ground-truth map. If 80% of the pixels in region H are labeled as foreground/background, label the super-pixel block R as foreground/background; otherwise discard R. Here foreground refers to regions containing targets and background refers to non-target regions. A standard random forest regressor is then learned from the labeled training samples;
(3-2) Train the multi-scale fusion weights: let {S_1, S_2, ..., S_N} be the multi-scale saliency maps obtained for a training sample and G the corresponding manual annotation; the multi-scale linear fusion weights w_n are trained by least squares, where argmin denotes taking the w_n that minimize the squared error, with the following formula:

argmin_{w_n} || G - Σ_{n=1}^{N} w_n S_n ||^2
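A minimal sketch of this least-squares fitting is given below, assuming the per-scale saliency maps and the ground-truth annotation of one training sample are flattened into vectors and the weights are solved with numpy's least-squares routine; stacking the maps of many training samples and any regularization are omitted for brevity.

```python
import numpy as np

def fit_fusion_weights(saliency_maps, ground_truth):
    """Least-squares fit of multi-scale fusion weights w_n minimizing
    || G - sum_n w_n * S_n ||^2.

    saliency_maps: list of N arrays (H x W), one per scale.
    ground_truth:  array (H x W) with the manual annotation in [0, 1].
    Returns an array of N weights."""
    # Each column of A is one flattened per-scale saliency map S_n.
    A = np.stack([s.ravel() for s in saliency_maps], axis=1)  # (H*W, N)
    g = ground_truth.ravel()                                   # (H*W,)
    w, *_ = np.linalg.lstsq(A, g, rcond=None)
    return w

def fuse(saliency_maps, w):
    """Linear fusion: final map = sum_n w_n * S_n."""
    return sum(wn * s for wn, s in zip(w, saliency_maps))
```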
Step 4: use the saliency model trained in step 3 to perform saliency analysis on the original image. Saliency is a visual model built from the perspective of human visual cognition and from physiology and psychology, so it reflects well the information in a scene that attracts attention. Fig. 4 shows the flow chart of the saliency analysis in the detection stage of the invention, which specifically comprises the following sub-steps:
(4-1) Use the classical graph-based segmentation method to segment the original image into N scale layers, with the resulting segmentations denoted T_1, T_2, ..., T_N, where each segmented layer image T_i consists of a number of independent super-pixel blocks;
(4-2) For each super-pixel block in each segmented layer image T_i, compute three categories of features: region property features, region contrast features, and region-to-background contrast features. For the region property features, compute features such as the color, texture and histogram of the super-pixel block in different color spaces (RGB, LAB, HSV). For the region contrast features, compute the contrast between the super-pixel block and all of its neighboring blocks, using the chi-square distance between histogram features and the absolute difference for non-histogram features. For the region-to-background contrast features, take the peripheral regions of the image as the background and compute the contrast between the super-pixel block and the background in the same way as the region contrast features. Finally, the three categories of features are concatenated as the feature of the super-pixel block;
(4-3) Extract the corresponding features according to step (4-2) and use the random forest regressor trained in step 3 to perform regression, obtaining the saliency value of each super-pixel block in each segmented layer image T_i, and thereby the saliency map C_i corresponding to each layer image T_i;
(4-4) Using the multi-scale fusion weights trained in step 3, linearly combine the obtained multi-scale saliency maps {C_1, C_2, ..., C_N} to obtain the final saliency map. Fig. 5 shows results of the saliency analysis in the detection stage of the invention, with the original image on the left and the result on the right. Comparing the two shows that saliency captures the salient regions in the image well. In the result, brighter locations have stronger saliency and are more likely to belong to a target, so saliency can be used to further analyze the target candidate regions and obtain the accurate target position. A sketch of the per-super-pixel contrast features described in sub-step (4-2) is given below.
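The sketch below illustrates, under stated assumptions, how the region contrast features of sub-step (4-2) might be computed for one super-pixel block: a color histogram per block, the chi-square distance to neighboring blocks for the histogram feature, and the absolute difference for a non-histogram feature; the joint-RGB histogram, its bin count, and the choice of mean color as the non-histogram feature are illustrative assumptions.

```python
import numpy as np

def color_histogram(pixels, bins=8):
    """Joint RGB histogram of a super-pixel block's pixels (N x 3, values 0..255)."""
    hist, _ = np.histogramdd(pixels, bins=(bins,) * 3, range=[(0, 256)] * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def chi_square(h1, h2):
    """Chi-square distance between two normalized histograms."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + 1e-9))

def region_contrast_features(block_pixels, neighbor_pixel_lists):
    """Contrast of one super-pixel block against its neighbors:
    chi-square distance for the histogram feature, absolute difference
    for the (assumed) mean-color feature, averaged over all neighbors."""
    h = color_histogram(block_pixels)
    mean = block_pixels.mean(axis=0)
    hist_d, mean_d = [], []
    for nb in neighbor_pixel_lists:
        hist_d.append(chi_square(h, color_histogram(nb)))
        mean_d.append(np.abs(mean - nb.mean(axis=0)))
    return np.concatenate([[np.mean(hist_d)], np.mean(mean_d, axis=0)])
```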
Step 5: after step 2, the target candidate regions are available, and after step 4, the saliency map is available. Because the candidate regions obtained in step 2 contain many false alarms, each candidate region is further verified against the saliency map obtained in step 4; the false-alarm candidate regions are rejected and the accurate position of the target is finally obtained.
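One simple way to implement this verification, sketched below under assumptions the patent does not specify, is to score each candidate box by the mean saliency inside it and keep only the boxes whose mean saliency exceeds a threshold; both the mean-saliency score and the threshold value are illustrative choices.

```python
import numpy as np

def reject_false_alarms(candidate_boxes, saliency_map, thr=0.4):
    """Keep candidate boxes whose mean saliency exceeds thr.

    candidate_boxes: iterable of (x1, y1, x2, y2) from the objectness stage.
    saliency_map:    H x W array in [0, 1] from the saliency stage.
    Returns the retained boxes as the final detections."""
    kept = []
    for (x1, y1, x2, y2) in candidate_boxes:
        patch = saliency_map[y1:y2, x1:x2]
        if patch.size > 0 and patch.mean() > thr:
            kept.append((x1, y1, x2, y2))
    return kept
```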
Those skilled in the art will readily understand that the foregoing is only a preferred embodiment of the invention and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (7)

1. A water-surface target rapid-detection method for unmanned vessel applications, characterized by comprising:
Step 1: train an intra-layer classifier and an inter-layer classifier, wherein the intra-layer classifier is used, at every layer of the constructed scale space, to judge whether the current candidate region is a target region, and the inter-layer classifier is used for weighted scoring across the different layers;
Step 2: use the intra-layer classifier and the inter-layer classifier to perform objectness analysis on the original image and obtain the final target candidate regions, comprising the following sub-steps:
(2-1) apply scale transformations to the original image to build a pyramid model and obtain images of different sizes, denoted L_1, L_2, ..., L_M, where M is the number of layers of the constructed scale space;
(2-2) in each layer image L_i, use a sliding window to extract a fixed-size region at each position, compute the NG feature of the region, and compute the region's score with the intra-layer classifier, thereby obtaining target candidate regions at the different layers;
(2-3) use the inter-layer classifier to assign weighted scores to the target candidate regions obtained at the different layers, and sort them according to the weighted scores;
(2-4) apply non-maximum suppression to the target candidate regions to obtain the final target candidate regions;
Step 3: train a random forest regressor and multi-scale fusion weights, wherein the random forest regressor is used to compute the saliency value of each super-pixel block after segmentation, and the multi-scale fusion weights are used to fuse the saliency maps obtained at different scales;
Step 4: use the random forest regressor and the multi-scale fusion weights to perform saliency analysis on the original image and obtain the final saliency map;
Step 5: according to the final target candidate regions and the final saliency map, reject the false-alarm candidate regions and finally obtain the accurate position of the target.
2. The method of claim 1, characterized in that step 1 comprises the following sub-steps:
(1-1) train the intra-layer classifier: extract the target regions directly from the training samples and compress them into blocks of fixed size as positive samples; select candidate blocks at random from the training samples, and use a candidate block as a negative sample if its overlap with the target region is below a fixed threshold;
(1-2) train the inter-layer classifier: rescale the training samples to obtain images of different layers, select blocks of fixed size at random, map each block back to the original image according to the compression ratio, and use it as a positive sample if its overlap with the target region exceeds the fixed threshold, otherwise as a negative sample.
3. The method of claim 1, characterized in that, when computing the NG feature in step (2-2), the maximum horizontal gradient magnitude over all channels is taken as g_x and the maximum vertical gradient magnitude as g_y, and the feature value of each point is computed by the formula min(|g_x| + |g_y|, 255).
4. The method of any one of claims 1-3, characterized in that step 3 comprises the following sub-steps:
(3-1) train the random forest regressor: use a graph-based segmentation algorithm to segment the original image at multiple scales; at each scale, for each super-pixel block obtained after segmentation, find the corresponding region in the manually annotated ground-truth map; if a certain proportion of the pixels in the corresponding region are labeled as foreground/background, label the super-pixel block as foreground/background, otherwise discard it, wherein the foreground refers to regions containing targets and the background refers to non-target regions; learn a standard random forest regressor from the labeled training samples;
(3-2) train the multi-scale fusion weights w_n with the following formula:

argmin_{w_n} || G - Σ_{n=1}^{N} w_n S_n ||^2

wherein {S_1, S_2, ..., S_N} denotes the multi-scale saliency maps obtained for each training sample, G denotes the corresponding manual annotation, and argmin denotes taking the multi-scale fusion weights that minimize the squared error.
5. The method of any one of claims 1-3, characterized in that step 4 comprises the following sub-steps:
(4-1) use a graph-based segmentation algorithm to segment the original image into N scale layers, with the resulting segmentations denoted T_1, T_2, ..., T_N, where each segmented layer image T_i consists of a number of independent super-pixel blocks;
(4-2) for each super-pixel block in each segmented layer image T_i, compute three categories of features: region property features, region contrast features, and region-to-background contrast features;
(4-3) extract the corresponding features according to step (4-2) and use the random forest regressor to perform regression, obtaining the saliency value of each super-pixel block in each segmented layer image T_i, and thereby the saliency map C_i corresponding to each layer image T_i;
(4-4) using the multi-scale fusion weights, linearly combine the obtained multi-scale saliency maps {C_1, C_2, ..., C_N} to obtain the final saliency map.
6. The method of claim 5, characterized in that, in step (4-2): for the region property features, the color, texture and histogram features of the super-pixel block are computed in different color spaces; for the region contrast features, the contrast between the super-pixel block and all of its neighboring blocks is computed, using the chi-square distance between histograms and the absolute difference for non-histogram features; for the region-to-background contrast features, the peripheral regions of the image are taken as the background and the contrast between the super-pixel block and the background is computed in the same way as the region contrast features.
7. The method of claim 5, characterized in that, in step (4-2), the three categories of features are concatenated as the feature of the super-pixel block.
CN201510368994.7A 2015-06-29 2015-06-29 A kind of waterborne target rapid detection method based on unmanned boat application Active CN105022990B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510368994.7A CN105022990B (en) 2015-06-29 2015-06-29 A kind of waterborne target rapid detection method based on unmanned boat application

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510368994.7A CN105022990B (en) 2015-06-29 2015-06-29 A kind of waterborne target rapid detection method based on unmanned boat application

Publications (2)

Publication Number Publication Date
CN105022990A true CN105022990A (en) 2015-11-04
CN105022990B CN105022990B (en) 2018-09-21

Family

ID=54412945

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510368994.7A Active CN105022990B (en) 2015-06-29 2015-06-29 A kind of waterborne target rapid detection method based on unmanned boat application

Country Status (1)

Country Link
CN (1) CN105022990B (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106444759A (en) * 2016-09-29 2017-02-22 浙江嘉蓝海洋电子有限公司 Automatic homeward voyaging method and automatic homeward voyaging system of unmanned boat
CN106530324A (en) * 2016-10-21 2017-03-22 华中师范大学 Visual cortex mechanism simulated video object tracking method
CN106845408A (en) * 2017-01-21 2017-06-13 浙江联运知慧科技有限公司 A kind of street refuse recognition methods under complex environment
CN106980814A (en) * 2016-01-15 2017-07-25 福特全球技术公司 With the pedestrian detection of conspicuousness map
CN107506766A (en) * 2017-08-25 2017-12-22 沈阳东软医疗系统有限公司 Image partition method and device
CN107844750A (en) * 2017-10-19 2018-03-27 华中科技大学 A kind of water surface panoramic picture target detection recognition methods
CN108121991A (en) * 2018-01-06 2018-06-05 北京航空航天大学 A kind of deep learning Ship Target Detection method based on the extraction of edge candidate region
CN108303747A (en) * 2017-01-12 2018-07-20 清华大学 The method for checking equipment and detecting gun
CN108399430A (en) * 2018-02-28 2018-08-14 电子科技大学 A kind of SAR image Ship Target Detection method based on super-pixel and random forest
CN108681691A (en) * 2018-04-09 2018-10-19 上海大学 A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
CN108765458A (en) * 2018-04-16 2018-11-06 上海大学 High sea situation unmanned boat sea-surface target dimension self-adaption tracking based on correlation filtering
CN109117838A (en) * 2018-08-08 2019-01-01 哈尔滨工业大学 Object detection method and device applied to unmanned boat sensory perceptual system
CN109242884A (en) * 2018-08-14 2019-01-18 西安电子科技大学 Remote sensing video target tracking method based on JCFNet network
CN110118561A (en) * 2019-06-10 2019-08-13 华东师范大学 A kind of unmanned boat paths planning method and unmanned boat
CN110174895A (en) * 2019-05-31 2019-08-27 中国船舶重工集团公司第七0七研究所 A kind of verification of unmanned boat Decision of Collision Avoidance and modification method
CN110188474A (en) * 2019-05-31 2019-08-30 中国船舶重工集团公司第七0七研究所 Decision of Collision Avoidance method based on unmanned surface vehicle
WO2020107716A1 (en) * 2018-11-30 2020-06-04 长沙理工大学 Target image segmentation method and apparatus, and device
CN111429435A (en) * 2020-03-27 2020-07-17 王程 Rapid and accurate cloud content detection method for remote sensing digital image
CN112418019A (en) * 2020-11-08 2021-02-26 国家电网有限公司 Aerial power communication optical cable inspection system and method
CN112417931A (en) * 2019-08-23 2021-02-26 河海大学常州校区 Method for detecting and classifying water surface objects based on visual saliency
CN113177358A (en) * 2021-04-30 2021-07-27 燕山大学 Soft measurement method for cement quality based on fuzzy fine-grained feature extraction
ES2912040A1 (en) * 2020-11-24 2022-05-24 Iglesias Rodrigo Garcia Delivery system of a consumer good (Machine-translation by Google Translate, not legally binding)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271514A (en) * 2007-03-21 2008-09-24 株式会社理光 Image detection method and device for fast object detection and objective output
US20120147189A1 (en) * 2010-12-08 2012-06-14 GM Global Technology Operations LLC Adaptation for clear path detection using reliable local model updating
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
CN104392228A (en) * 2014-12-19 2015-03-04 中国人民解放军国防科学技术大学 Unmanned aerial vehicle image target class detection method based on conditional random field model

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271514A (en) * 2007-03-21 2008-09-24 株式会社理光 Image detection method and device for fast object detection and objective output
US20120147189A1 (en) * 2010-12-08 2012-06-14 GM Global Technology Operations LLC Adaptation for clear path detection using reliable local model updating
CN103729848A (en) * 2013-12-28 2014-04-16 北京工业大学 Hyperspectral remote sensing image small target detection method based on spectrum saliency
CN104392228A (en) * 2014-12-19 2015-03-04 中国人民解放军国防科学技术大学 Unmanned aerial vehicle image target class detection method based on conditional random field model

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LINGFU KONG et al.: "Salient region detection: an integration approach based on image pyramid and region property", IET Computer Vision *
郭雷 et al.: "Airport detection in remote sensing images combining visual saliency and spatial pyramid" (in Chinese), Journal of Northwestern Polytechnical University *

Cited By (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106980814A (en) * 2016-01-15 2017-07-25 福特全球技术公司 With the pedestrian detection of conspicuousness map
CN106444759A (en) * 2016-09-29 2017-02-22 浙江嘉蓝海洋电子有限公司 Automatic homeward voyaging method and automatic homeward voyaging system of unmanned boat
CN106530324A (en) * 2016-10-21 2017-03-22 华中师范大学 Visual cortex mechanism simulated video object tracking method
CN108303747A (en) * 2017-01-12 2018-07-20 清华大学 The method for checking equipment and detecting gun
CN106845408A (en) * 2017-01-21 2017-06-13 浙江联运知慧科技有限公司 A kind of street refuse recognition methods under complex environment
CN106845408B (en) * 2017-01-21 2023-09-01 浙江联运知慧科技有限公司 Street garbage identification method under complex environment
CN107506766A (en) * 2017-08-25 2017-12-22 沈阳东软医疗系统有限公司 Image partition method and device
CN107506766B (en) * 2017-08-25 2020-03-17 东软医疗系统股份有限公司 Image segmentation method and device
CN107844750B (en) * 2017-10-19 2020-05-19 华中科技大学 Water surface panoramic image target detection and identification method
CN107844750A (en) * 2017-10-19 2018-03-27 华中科技大学 A kind of water surface panoramic picture target detection recognition methods
CN108121991A (en) * 2018-01-06 2018-06-05 北京航空航天大学 A kind of deep learning Ship Target Detection method based on the extraction of edge candidate region
CN108399430A (en) * 2018-02-28 2018-08-14 电子科技大学 A kind of SAR image Ship Target Detection method based on super-pixel and random forest
CN108681691A (en) * 2018-04-09 2018-10-19 上海大学 A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
CN108765458A (en) * 2018-04-16 2018-11-06 上海大学 High sea situation unmanned boat sea-surface target dimension self-adaption tracking based on correlation filtering
CN108765458B (en) * 2018-04-16 2022-07-12 上海大学 Sea surface target scale self-adaptive tracking method of high-sea-condition unmanned ship based on correlation filtering
CN109117838A (en) * 2018-08-08 2019-01-01 哈尔滨工业大学 Object detection method and device applied to unmanned boat sensory perceptual system
CN109117838B (en) * 2018-08-08 2021-10-12 哈尔滨工业大学 Target detection method and device applied to unmanned ship sensing system
CN109242884A (en) * 2018-08-14 2019-01-18 西安电子科技大学 Remote sensing video target tracking method based on JCFNet network
CN109242884B (en) * 2018-08-14 2020-11-20 西安电子科技大学 Remote sensing video target tracking method based on JCFNet network
WO2020107716A1 (en) * 2018-11-30 2020-06-04 长沙理工大学 Target image segmentation method and apparatus, and device
CN110174895A (en) * 2019-05-31 2019-08-27 中国船舶重工集团公司第七0七研究所 A kind of verification of unmanned boat Decision of Collision Avoidance and modification method
CN110188474A (en) * 2019-05-31 2019-08-30 中国船舶重工集团公司第七0七研究所 Decision of Collision Avoidance method based on unmanned surface vehicle
CN110118561A (en) * 2019-06-10 2019-08-13 华东师范大学 A kind of unmanned boat paths planning method and unmanned boat
CN112417931A (en) * 2019-08-23 2021-02-26 河海大学常州校区 Method for detecting and classifying water surface objects based on visual saliency
CN112417931B (en) * 2019-08-23 2024-01-26 河海大学常州校区 Method for detecting and classifying water surface objects based on visual saliency
CN111429435A (en) * 2020-03-27 2020-07-17 王程 Rapid and accurate cloud content detection method for remote sensing digital image
CN112418019A (en) * 2020-11-08 2021-02-26 国家电网有限公司 Aerial power communication optical cable inspection system and method
ES2912040A1 (en) * 2020-11-24 2022-05-24 Iglesias Rodrigo Garcia Delivery system of a consumer good (Machine-translation by Google Translate, not legally binding)
CN113177358A (en) * 2021-04-30 2021-07-27 燕山大学 Soft measurement method for cement quality based on fuzzy fine-grained feature extraction
CN113177358B (en) * 2021-04-30 2022-06-03 燕山大学 Soft measurement method for cement quality based on fuzzy fine-grained feature extraction

Also Published As

Publication number Publication date
CN105022990B (en) 2018-09-21

Similar Documents

Publication Publication Date Title
CN105022990A (en) Water surface target rapid-detection method based on unmanned vessel application
Shao et al. Saliency-aware convolution neural network for ship detection in surveillance video
Zhang et al. S-CNN-based ship detection from high-resolution remote sensing images
CN106384344B (en) A kind of remote sensing image surface vessel target detection and extracting method
CN109740460B (en) Optical remote sensing image ship detection method based on depth residual error dense network
CN106022232A (en) License plate detection method based on deep learning
Bovcon et al. WaSR—A water segmentation and refinement maritime obstacle detection network
CN106682586A (en) Method for real-time lane line detection based on vision under complex lighting conditions
CN104392228A (en) Unmanned aerial vehicle image target class detection method based on conditional random field model
CN110008900B (en) Method for extracting candidate target from visible light remote sensing image from region to target
CN110458160A (en) A kind of unmanned boat waterborne target recognizer based on depth-compression neural network
CN110472500A (en) A kind of water surface sensation target fast algorithm of detecting based on high speed unmanned boat
Li et al. High-level visual features for underwater place recognition
CN112818905B (en) Finite pixel vehicle target detection method based on attention and spatio-temporal information
CN106203439B (en) The homing vector landing concept of unmanned plane based on marker multiple features fusion
Zhang et al. Research on unmanned surface vehicles environment perception based on the fusion of vision and lidar
Mistry et al. Survey: Vision based road detection techniques
Zang et al. Traffic lane detection using fully convolutional neural network
CN108765439A (en) A kind of sea horizon detection method based on unmanned water surface ship
Sun et al. IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes
Shi et al. Obstacle type recognition in visual images via dilated convolutional neural network for unmanned surface vehicles
CN103810487A (en) Method and system for target detection and identification of aerial ocean images
Larsson et al. Latent space metric learning for sidescan sonar place recognition
CN108681691A (en) A kind of marine ships and light boats rapid detection method based on unmanned water surface ship
Shi et al. Object detection based on saliency and sea-sky line for USV vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant