CN108229319A - The ship video detecting method merged based on frame difference with convolutional neural networks - Google Patents
- Publication number
- CN108229319A CN108229319A CN201711226281.2A CN201711226281A CN108229319A CN 108229319 A CN108229319 A CN 108229319A CN 201711226281 A CN201711226281 A CN 201711226281A CN 108229319 A CN108229319 A CN 108229319A
- Authority
- CN
- China
- Prior art keywords
- ship
- frame
- video
- region
- roi region
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
- G06F18/253—Fusion techniques of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/50—Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
Abstract
A ship video detection method based on the fusion of frame difference and a convolutional neural network, comprising four parts: preprocessing the input video; obtaining the ROI region of each frame and extracting shallow features; obtaining the high-level features of each frame with a modified VGG16 network; and predicting the ship saliency map of each frame's ROI region to extract the ship target. The invention makes full use of the relationship between consecutive video frames, reduces background interference, accurately locates the moving ship, and obtains the region of ship motion. Compared with ship saliency detection that uses only low-level features, it can be applied directly to ship video detection while reducing cases of incomplete ship detection. It adapts better to complex inland-waterway scenes of moving ships, achieves higher detection accuracy, and solves the problem of inaccurate saliency detection of inland-waterway ship targets, giving it high practical application value.
Description
Technical field
The invention belongs to the technical field of image processing and relates to computer vision detection technology. It is used for video detection of moving ships on inland waterways, and is a ship video detection method based on the fusion of frame difference and a convolutional neural network.
Background technology
Compared with road transport, inland water transport is cheap: freight rates are roughly one tenth of road rates, making it especially suitable for bulk cargo, and it is widely used. Although surveillance cameras have gradually been deployed along waterways, the automation of surveillance video analysis still lags behind. Glittering ripples, the low contrast of the water surface compared with a road surface, and interference such as stern wakes and tow lines undoubtedly increase the difficulty of waterway video detection. Moreover, to accurately monitor ships and water conditions during shipping and to avoid groundings or bridge collisions, ships must be detected accurately, also as a prerequisite for subsequent ship-size measurement. Accurate detection of moving ships is therefore both particularly important and a current research hotspot, with significant practical application value and major economic value.
Many ship detection techniques already exist, but they mainly detect ships in individual still images: a single frame is chosen from an inland-waterway surveillance video, and detection relies only on shallow features of the ship target, such as its color, texture, and position in the image, followed by segmentation of the ship region to obtain the target ship. Such methods are not only poorly timed and slow to detect, but their applicable ship scenes are limited and their robustness is low, so they cannot be applied to video detection. Analysis of ship video reveals several characteristics: the video is shot by a fixed camera covering the same inland-waterway scene at a constant angle and viewing distance, and ships move slowly without violent displacement, so there are no abrupt changes of illumination or ship position between consecutive frames, and the backgrounds and targets of two consecutive frames are strongly correlated. Based on these characteristics of video, the present invention designs a ship detection method for inland-waterway surveillance video.
Summary of the invention
The problem to be solved by the present invention is that existing ship detection techniques do not make full use of the characteristics of ship video, their detection accuracy cannot meet demand, and their applicable scenes are limited.
The technical solution of the present invention is a ship video detection method based on the fusion of frame difference and a convolutional neural network, which performs ship detection on inland-waterway surveillance video and comprises the following steps:
Step S1: preprocess the input video; the preprocessing includes noise filtering and image sharpening.
Step S2: compute the inter-frame difference of the video and, together with the ship position detected in the previous frame, obtain the region where the ship to be detected may exist; take this as the ROI region, divide the ROI region into small regions, and extract shallow features from them.
Step S3: modify the VGG16 network model: add shortcut connections pairwise between the convolutional layers of the conv3, conv4, and conv5 stages of VGG16, forming residual basic blocks, and additionally append a 1 × 1 convolution kernel after the conv5-3 layer; then train this VGG16 network with ship pictures of real inland-waterway scenes obtained from video together with the ImageNet dataset, and use the trained network to extract high-level features from each frame of the input ship video, taking only the high-level features produced by the conv5-3 layer.
Step S4: use the extracted feature vectors to predict the saliency of the ROI region, region by region, obtain the saliency map of the ROI region, and extract the ship target.
Further, step S2 is specifically: compute the difference between consecutive frames of the video by subtracting the previous frame from the following frame to obtain the gray-value difference of corresponding pixels; set a threshold, regard the parts whose difference is below the threshold as unchanged regions, i.e. background, and the parts above the threshold as the change region caused by the target ship's motion; perform connectivity analysis on that region to obtain the ROI region where a ship may exist, and also mark the ship position detected in the previous frame as ROI. Then randomly select N pixels over the whole ROI region, partition the surrounding pixels into N small regions by distance, and extract from each region shallow features including an RGB color histogram, a LAB color histogram, an HSV color histogram, Gabor filter responses, and the maximum Gabor response.
Step S4 is specifically: train a softmax classifier with ship pictures of real inland-waterway scenes; then flatten the shallow features of each frame's ROI region and the high-level features of that frame into one-dimensional vectors, concatenate them into a single feature vector, feed it through two subsequent fully connected layers and finally the softmax classifier, thereby judging the ship saliency of the ROI region and extracting the ships in the video.
The present invention exploits the facts that brightness changes in ship video are small, ships move slowly, and inter-frame differences are small: the pixel changes between two frames yield the ROI regions where ships need to be detected, and shallow features are then extracted from the ROI regions for detection, making ship localization more accurate and background interference smaller, which improves the ship detection rate. Considering that ships are weakly colored and place high demands on features, VGG16 is modified with residual basic blocks so that the ship features it extracts are finer and more accurate, and the additionally appended convolutional layer serves feature selection and dimensionality reduction. Meanwhile, to suit the experimental conditions of real inland-waterway ship target detection and to improve detection precision and speed, only the feature maps produced by the "conv5_3" layer of the VGG16 network are extracted.
Beneficial effects of the present invention are:
To extract target ships from ship video, the present invention considers each frame of the video together with the relationship between frames, combines shallow image features with high-level features, and judges ship saliency to detect targets. The inter-frame difference yields the ship's motion region, which is combined with the ship's initial position to form the ROI region; compared with detecting whole images directly, this excludes interference from most of the background water and shrinks the detection region of each frame, so the features extracted from each ROI region are finer and target localization is more accurate. In addition, residual basic blocks are added to the VGG16 network; since the shortcut connections amount to a simple operation similar to an identity mapping, they introduce no extra parameters and no extra computational complexity, yet they optimize the network better, reduce the training error of the whole network, and make the extracted image features richer and more abstract. Training the network on a large number of real ship pictures together with the ImageNet dataset strengthens its feature-learning ability, so it extracts the high-level features of pictures well even when facing different inland-waterway scenes. Extracting ROI regions to suppress background interference, and combining the high-level features with the shallow features of the ROI region to judge ROI saliency, overcomes both the tendency of traditional shallow-feature detection to be disturbed by background and to detect ships incompletely, and the inaccurate localization produced by detection with convolutional neural networks alone. The present invention therefore adapts better to complex inland-waterway scenes of moving ships, with higher detection accuracy and smaller error, solving the problem of inaccurate video detection of moving ships in inland-waterway scenes.
Description of the drawings
Fig. 1 is the flow chart of the method of the present invention.
Fig. 2 is a schematic diagram of extracting the frame difference and forming the ROI region in the method of the present invention.
Fig. 3 is an example of the modification of the VGG16 network in the method of the present invention.
Specific embodiment
The present invention provides a ship video detection method based on the fusion of frame difference and a convolutional neural network. The inter-frame difference of the ship video is computed and, together with the ship position detected in the previous frame, yields the region where the ship to be detected may exist; this is taken as the ROI region, which is divided into small regions from which shallow features are extracted. A modified and trained VGG16 network then extracts high-level features from each frame image; the shallow and high-level features are concatenated into one feature vector and fed into a subsequent softmax classifier to judge the ship saliency of the ROI region, giving the final detection result.
The invention mainly comprises four parts: preprocessing the video, such as noise suppression and image sharpening; extracting the inter-frame difference of the ship video, obtaining the ROI region, and extracting shallow features; extracting high-level features with the modified VGG16 network; and judging the ship saliency of the ROI region and extracting the ship target. The implementation of the present invention is described in detail below; as shown in Fig. 1, it comprises the following steps:
Step S1: input the ship video of an inland-waterway scene. Because the waters in which ships travel suffer from ripples, reflective and uneven water surfaces, and ship edges easily blurred by the water, each video frame must first be preprocessed. Low-pass filtering is applied first to remove high-frequency noise from the image. Since filtering blurs each frame, image sharpening is then applied using the Sobel operator and morphological dilation, making the edges, contours, and details of the ship clearer, enhancing contrast, and reducing the water surface's interference with ship boundaries, which benefits subsequent detection. Each frame of the video is then resized to 224 × 224 to meet the input requirement of the VGG16 network.
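The Sobel gradient used in the sharpening stage of step S1 can be illustrated with a minimal, library-free sketch (illustrative only: the frame is assumed to be a 2-D list of gray values, and the low-pass filtering, morphological dilation, and resizing steps are omitted):

```python
# Standard 3x3 Sobel kernels for horizontal and vertical gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img):
    # img: 2-D list of gray values; returns the gradient magnitude,
    # which is large along ship edges and contours (border left at 0).
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(GX[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(GY[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out
```

Adding a scaled gradient magnitude back onto the frame is one common way to sharpen edges; the patent does not fix the exact combination of the gradient with the original frame.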
Step S2: compute the difference between the preceding and following frames and, with reference to the ship position detected in the previous frame, obtain the ROI region of the following frame image and extract its shallow features:
S2.1) For the first frame of the ship video, randomly select N pixel centers and partition the surrounding pixels into N cluster regions by distance. From each region, extract shallow image features including the mean chromaticity in the RGB, LAB, and HSV color spaces, color histogram features, Gabor filter response features, and spatial position features. Then feed the first frame into the modified and trained VGG16 network and extract the conv5-3 high-level features. Flatten the shallow image features and the high-level features into one-dimensional feature vectors, concatenate them into one overall feature vector, and feed that vector into the subsequent fully connected layers and softmax classifier to judge the saliency of each cluster region, thereby extracting the ships in the first frame;
S2.2) For every frame after the first, let f_n(x, y) denote the previous frame and f_{n+1}(x, y) the following frame, where (x, y) is a pixel position. Since all frames of the video have the same size, subtract the previous frame from the following frame and take the absolute value to obtain the change of each corresponding pixel's gray value, i.e.
D_{n+1}(x, y) = |f_{n+1}(x, y) − f_n(x, y)| (1)
Set a threshold T. The parts where D_{n+1}(x, y) exceeds T, mapped back to the image f_{n+1}(x, y), are taken to be the ship's motion region, and the parts where D_{n+1}(x, y) is below T are regarded as unchanged background, which yields the ship's change between the two frames. Connectivity analysis then joins the parts of D_{n+1}(x, y) above T into one enclosed region. As shown in Fig. 2, in panel (a) the left ellipse represents the target position in the previous frame, the right ellipse the target position in the next frame, and the shaded area all pixels of the inter-frame change region of the target position; panel (b) shows the target's to-be-detected ROI region obtained from panel (a) by connectivity analysis. This finally gives the ROI region in which the following frame must detect the ship;
S2.3) Considering that some ships may stop briefly and remain motionless, the position of the ship detected in the previous frame is also marked at the corresponding position of the following frame and likewise treated as an ROI region to be detected; combined with the ROI region from S2.2), this gives the total ROI region each frame must detect;
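The frame differencing of equation (1), the thresholding with T, and the connectivity analysis that produce the ROI regions can be sketched in plain Python (an illustrative, library-free sketch on small gray-value grids; a practical implementation would use an image-processing library on full frames):

```python
def frame_diff_roi(prev, curr, T):
    # D(x, y) = |f_{n+1}(x, y) - f_n(x, y)|; pixels whose difference
    # exceeds the threshold T are candidate ship-motion pixels.
    h, w = len(prev), len(prev[0])
    mask = [[1 if abs(curr[y][x] - prev[y][x]) > T else 0
             for x in range(w)] for y in range(h)]
    # Connectivity analysis: join above-threshold pixels into blobs and
    # return one bounding box (x0, y0, x1, y1) per 4-connected blob.
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                stack, xs, ys = [(y, x)], [x], [y]
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                            ys.append(ny)
                            xs.append(nx)
                boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Per S2.3), the returned boxes would then be merged with the ship position detected in the previous frame to form the total ROI region of the frame.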
S2.4) Select N pixels in the ROI region as centers and partition the surrounding pixels of each center into N regions by distance. From each region, extract shallow features including an RGB color histogram, a LAB color histogram, an HSV color histogram, Gabor filter responses, and the maximum Gabor response, and express these features as one feature vector in the order: RGB color histogram, LAB color histogram, HSV color histogram, Gabor filter responses, maximum Gabor response.
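The per-region shallow features of S2.4) can be illustrated with the RGB color histogram alone; the LAB and HSV histograms are computed analogously after color-space conversion, and the Gabor responses are appended in the stated order. The bin count of 8 is an illustrative choice, not one fixed by the patent:

```python
def channel_hist(values, bins=8, vmax=256):
    # Normalized histogram of one color channel (values in [0, vmax)).
    hist = [0] * bins
    for v in values:
        hist[min(v * bins // vmax, bins - 1)] += 1
    n = len(values)
    return [c / n for c in hist]

def rgb_histogram_feature(region):
    # region: list of (r, g, b) pixels from one small ROI sub-region.
    # Concatenates the three per-channel histograms into one vector.
    feats = []
    for ch in range(3):
        feats += channel_hist([p[ch] for p in region])
    return feats
```

Each of the N sub-regions yields such a vector; concatenating all shallow descriptors in a fixed order is what lets them later be spliced with the high-level VGG16 features.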
Step S3: modify the VGG16 network, then train the modified VGG16 network with ship pictures of real inland-waterway scenes and the ImageNet dataset, for extracting high-level features for the ROI regions of S2:
S3.1) Considering that details in ship video may be insufficiently clear while ROI detection needs accurate features, the VGG16 network is modified so that the features it extracts give better detection results, richer features are provided, and training error is reduced. Specifically: shortcut connections are added pairwise between the convolutional layers of the conv3, conv4, and conv5 stages of VGG16, forming residual basic blocks, which makes the features extracted by the VGG16 network more accurate, reduces training error, and strengthens the generalization ability of the network model; at the same time, a 1 × 1 convolution kernel is additionally appended after the conv5-3 layer for feature selection and dimensionality reduction. The network is then trained with ship pictures of real inland-waterway scenes and the ImageNet dataset. To make the features extracted from each frame of the input ship video by the trained VGG16 network more suitable for saliency judgment, only the high-level features produced by the conv5-3 layer are extracted. As shown in Fig. 3, panel (a) shows the modification of the whole VGG16 network, and panel (b) explains the dashed part of panel (a): the input of the preceding convolutional layer is multiplied by a projection matrix and then fed into the ReLU layer after the next convolutional layer, solving the problem that layers of unequal dimensions cannot be connected directly;
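The appended 1 × 1 convolution and the projection shortcut of Fig. 3(b) both reduce, at each spatial position, to a linear map across channels followed by an element-wise addition; a minimal plain-Python sketch with illustrative shapes (not the real VGG16 dimensions):

```python
def conv1x1(fmap, w):
    # fmap: [C_in][H][W] feature map; w: [C_out][C_in] kernel weights.
    # A 1x1 convolution is a per-pixel linear map across channels.
    c_in, h, wd = len(fmap), len(fmap[0]), len(fmap[0][0])
    return [[[sum(w[o][c] * fmap[c][y][x] for c in range(c_in))
              for x in range(wd)] for y in range(h)]
            for o in range(len(w))]

def projection_shortcut(x, transform, proj):
    # y = transform(x) + P x : the shortcut input is projected by a
    # 1x1 convolution `proj` so both branches have equal channel
    # counts before the element-wise addition of the residual block.
    t = transform(x)
    p = conv1x1(x, proj)
    return [[[t[c][y][i] + p[c][y][i]
              for i in range(len(t[0][0]))]
             for y in range(len(t[0]))]
            for c in range(len(t))]
```

Because a 1 × 1 convolution mixes only channels, choosing fewer output channels than input channels performs the dimensionality reduction mentioned above, and the projection matrix lets branches of unequal channel counts be added.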
S3.2) Build a dataset from ship pictures of real inland-waterway scenes and train the modified VGG16 model together with the ImageNet dataset. Then feed each frame of the video into the VGG16 network, extract the features produced by its conv5-3 layer, and flatten them into a one-dimensional vector, which serves as the high-level feature vector subsequently used to judge saliency and detect the ship;
S3.3) The output of the VGG16 convolutional neural network is assessed with the cross-entropy loss of the softmax classifier, computed as
L = −log( e^{z_l} / (e^{z_0} + e^{z_1}) ) (2)
where 0 and 1 denote the non-salient region and the salient region respectively, z_0 and z_1 are the score values of each label (i.e. 0 and 1) of the training data, l is the ground-truth label, and L is the loss value;
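For the two classes of S3.3), the softmax cross-entropy can be written directly; in this sketch `z` holds the scores z_0 and z_1 and `label` the ground-truth class:

```python
import math

def softmax_cross_entropy(z, label):
    # L = -log( exp(z_label) / sum_k exp(z_k) ), numerically
    # stabilized by subtracting the maximum score before exponentiation.
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    return -math.log(exps[label] / sum(exps))
```

With equal scores the loss is log 2 ≈ 0.693, and it decreases toward 0 as the score of the true label dominates.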
Step S4: concatenate the shallow feature vector of the ROI region obtained in step S2 and the high-level feature vector obtained in step S3 into one feature vector, feed it through the two subsequent fully connected layers with 1024 nodes each and then the softmax classifier, judge the saliency of the ROI region, obtain the ship saliency map of the ROI region, and detect the target ship.
Claims (3)
1. A ship video detection method based on the fusion of frame difference and a convolutional neural network, characterized in that ship detection is performed on inland-waterway surveillance video and comprises the following steps:
Step S1: preprocess the input video; the preprocessing includes noise filtering and image sharpening.
Step S2: compute the inter-frame difference of the video and, together with the ship position detected in the previous frame, obtain the region where the ship to be detected may exist; take this as the ROI region, divide the ROI region into small regions, and extract shallow features.
Step S3: modify the VGG16 network model: add shortcut connections pairwise between the convolutional layers of the conv3, conv4, and conv5 stages of VGG16, forming residual basic blocks, and additionally append a 1 × 1 convolution kernel after the conv5-3 layer; then train this VGG16 network with ship pictures of real inland-waterway scenes obtained from video together with the ImageNet dataset, and use the trained network to extract high-level features from each frame of the input ship video, taking only the high-level features produced by the conv5-3 layer.
Step S4: use the extracted feature vectors to predict the saliency of the ROI region, region by region, obtain the saliency map of the ROI region, and extract the ship target.
2. The ship video detection method based on the fusion of frame difference and a convolutional neural network according to claim 1, characterized in that step S2 is specifically: compute the difference between consecutive frames of the video by subtracting the previous frame from the following frame to obtain the gray-value difference of corresponding pixels; set a threshold, regard the parts whose difference is below the threshold as unchanged regions, i.e. background, and the parts above the threshold as the change region caused by the target ship's motion; perform connectivity analysis on that region to obtain the ROI region where a ship may exist, and also mark the ship position detected in the previous frame as ROI; then randomly select N pixels over the whole ROI region, partition the surrounding pixels into N small regions by distance, and extract from each region shallow features including an RGB color histogram, a LAB color histogram, an HSV color histogram, Gabor filter responses, and the maximum Gabor response.
3. The ship video detection method based on the fusion of frame difference and a convolutional neural network according to claim 1, characterized in that step S4 is specifically: train a softmax classifier with ship pictures of real inland-waterway scenes; then flatten the shallow features of each frame's ROI region and the high-level features of that frame into one-dimensional vectors, concatenate them into one feature vector, feed it through two subsequent fully connected layers and finally the softmax classifier, thereby judging the ship saliency of the ROI region and extracting the ships in the video.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711226281.2A CN108229319A (en) | 2017-11-29 | 2017-11-29 | The ship video detecting method merged based on frame difference with convolutional neural networks |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711226281.2A CN108229319A (en) | 2017-11-29 | 2017-11-29 | The ship video detecting method merged based on frame difference with convolutional neural networks |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108229319A true CN108229319A (en) | 2018-06-29 |
Family
ID=62653558
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711226281.2A Pending CN108229319A (en) | 2017-11-29 | 2017-11-29 | The ship video detecting method merged based on frame difference with convolutional neural networks |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108229319A (en) |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109063630A (en) * | 2018-07-27 | 2018-12-21 | 北京以萨技术股份有限公司 | A kind of fast vehicle detection method based on separable convolution technique and frame difference compensation policy |
CN109271856A (en) * | 2018-08-03 | 2019-01-25 | 西安电子科技大学 | Remote sensing image object detection method based on expansion residual error convolution |
CN109389593A (en) * | 2018-09-30 | 2019-02-26 | 内蒙古科技大学 | A kind of detection method, device, medium and the equipment of infrared image Small object |
CN109858481A (en) * | 2019-01-09 | 2019-06-07 | 杭州电子科技大学 | A kind of Ship Target Detection method based on the detection of cascade position sensitivity |
CN109886114A (en) * | 2019-01-18 | 2019-06-14 | 杭州电子科技大学 | A kind of Ship Target Detection method based on cluster translation feature extraction strategy |
CN109903339A (en) * | 2019-03-26 | 2019-06-18 | 南京邮电大学 | A kind of video group personage's position finding and detection method based on multidimensional fusion feature |
CN109948557A (en) * | 2019-03-22 | 2019-06-28 | 中国人民解放军国防科技大学 | Smoke detection method with multi-network model fusion |
CN110717575A (en) * | 2018-07-13 | 2020-01-21 | 奇景光电股份有限公司 | Frame buffer free convolutional neural network system and method |
CN110879950A (en) * | 2018-09-06 | 2020-03-13 | 北京市商汤科技开发有限公司 | Multi-stage target classification and traffic sign detection method and device, equipment and medium |
CN111103977A (en) * | 2019-12-09 | 2020-05-05 | 武汉理工大学 | Processing method and system for auxiliary driving data of ship |
CN111339893A (en) * | 2020-02-21 | 2020-06-26 | 哈尔滨工业大学 | Pipeline detection system and method based on deep learning and unmanned aerial vehicle |
CN111783524A (en) * | 2020-05-19 | 2020-10-16 | 普联国际有限公司 | Scene change detection method and device, storage medium and terminal equipment |
CN112257667A (en) * | 2020-11-12 | 2021-01-22 | 珠海大横琴科技发展有限公司 | Small ship detection method and device, electronic equipment and storage medium |
CN112270284A (en) * | 2020-11-06 | 2021-01-26 | 南京斌之志网络科技有限公司 | Lighting facility monitoring method and system and electronic equipment |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140079118A1 (en) * | 2012-09-14 | 2014-03-20 | Texas Instruments Incorporated | Region of Interest (ROI) Request and Inquiry in a Video Chain |
CN106709447A (en) * | 2016-12-21 | 2017-05-24 | 华南理工大学 | Abnormal behavior detection method in video based on target positioning and characteristic fusion |
CN107103303A (en) * | 2017-04-27 | 2017-08-29 | 昆明理工大学 | A kind of pedestrian detection method based on GMM backgrounds difference and union feature |
CN107292247A (en) * | 2017-06-05 | 2017-10-24 | 浙江理工大学 | A kind of Human bodys' response method and device based on residual error network |
- 2017-11-29: CN application CN201711226281.2A filed; published as CN108229319A (en); status: active, Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140079118A1 (en) * | 2012-09-14 | 2014-03-20 | Texas Instruments Incorporated | Region of Interest (ROI) Request and Inquiry in a Video Chain |
CN106709447A (en) * | 2016-12-21 | 2017-05-24 | 华南理工大学 | Abnormal behavior detection method in video based on target positioning and characteristic fusion |
CN107103303A (en) * | 2017-04-27 | 2017-08-29 | 昆明理工大学 | Pedestrian detection method based on GMM background difference and combined features |
CN107292247A (en) * | 2017-06-05 | 2017-10-24 | 浙江理工大学 | Human behavior recognition method and device based on a residual network |
Non-Patent Citations (1)
Title |
---|
Fang Xiangru, "Research on Video Saliency Detection Algorithms Based on Motion Features", China Master's Theses Full-text Database, Information Science and Technology Series * |
Cited By (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110717575A (en) * | 2018-07-13 | 2020-01-21 | 奇景光电股份有限公司 | Frame buffer free convolutional neural network system and method |
CN110717575B (en) * | 2018-07-13 | 2022-07-26 | 奇景光电股份有限公司 | Frame buffer free convolutional neural network system and method |
CN109063630A (en) * | 2018-07-27 | 2018-12-21 | 北京以萨技术股份有限公司 | Fast vehicle detection method based on separable convolution technology and a frame-difference compensation strategy |
CN109063630B (en) * | 2018-07-27 | 2022-04-26 | 以萨技术股份有限公司 | Rapid vehicle detection method based on separable convolution technology and frame difference compensation strategy |
CN109271856B (en) * | 2018-08-03 | 2021-09-03 | 西安电子科技大学 | Optical remote sensing image target detection method based on dilated residual convolution |
CN109271856A (en) * | 2018-08-03 | 2019-01-25 | 西安电子科技大学 | Optical remote sensing image target detection method based on dilated residual convolution |
CN110879950A (en) * | 2018-09-06 | 2020-03-13 | 北京市商汤科技开发有限公司 | Multi-stage target classification and traffic sign detection method and device, equipment and medium |
CN109389593A (en) * | 2018-09-30 | 2019-02-26 | 内蒙古科技大学 | Method, apparatus, medium and device for detecting small targets in infrared images |
CN109858481A (en) * | 2019-01-09 | 2019-06-07 | 杭州电子科技大学 | Ship target detection method based on cascaded position-sensitive detection |
CN109886114A (en) * | 2019-01-18 | 2019-06-14 | 杭州电子科技大学 | Ship target detection method based on a clustering-translation feature extraction strategy |
CN109948557A (en) * | 2019-03-22 | 2019-06-28 | 中国人民解放军国防科技大学 | Smoke detection method with multi-network model fusion |
CN109948557B (en) * | 2019-03-22 | 2022-04-22 | 中国人民解放军国防科技大学 | Smoke detection method with multi-network model fusion |
CN109903339A (en) * | 2019-03-26 | 2019-06-18 | 南京邮电大学 | Method for locating and detecting persons in video crowds based on multidimensional fused features |
CN111103977B (en) * | 2019-12-09 | 2021-06-01 | 武汉理工大学 | Processing method and system for auxiliary driving data of ship |
CN111103977A (en) * | 2019-12-09 | 2020-05-05 | 武汉理工大学 | Processing method and system for auxiliary driving data of ship |
CN111339893A (en) * | 2020-02-21 | 2020-06-26 | 哈尔滨工业大学 | Pipeline detection system and method based on deep learning and unmanned aerial vehicle |
CN111339893B (en) * | 2020-02-21 | 2022-11-22 | 哈尔滨工业大学 | Pipeline detection system and method based on deep learning and unmanned aerial vehicle |
CN111783524A (en) * | 2020-05-19 | 2020-10-16 | 普联国际有限公司 | Scene change detection method and device, storage medium and terminal equipment |
CN111783524B (en) * | 2020-05-19 | 2023-10-17 | 普联国际有限公司 | Scene change detection method and device, storage medium and terminal equipment |
CN112270284A (en) * | 2020-11-06 | 2021-01-26 | 南京斌之志网络科技有限公司 | Lighting facility monitoring method and system and electronic equipment |
CN112270284B (en) * | 2020-11-06 | 2021-12-03 | 奥斯福集团有限公司 | Lighting facility monitoring method and system and electronic equipment |
CN112257667A (en) * | 2020-11-12 | 2021-01-22 | 珠海大横琴科技发展有限公司 | Small ship detection method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108229319A (en) | The ship video detecting method merged based on frame difference with convolutional neural networks | |
Broggi | Robust real-time lane and road detection in critical shadow conditions | |
Shin et al. | Vision-based navigation of an unmanned surface vehicle with object detection and tracking abilities | |
CN108830171B (en) | Intelligent logistics warehouse guide line visual detection method based on deep learning | |
Anishiya et al. | Number plate recognition for Indian cars using morphological dilation and erosion with the aid of OCRs | |
CN109543632A (en) | Deep network pedestrian detection method guided by shallow-layer feature fusion | |
CN104601964A (en) | Cross-camera indoor pedestrian target tracking method and system for non-overlapping fields of view | |
CN101334836A (en) | License plate positioning method incorporating color, size and texture characteristic | |
CN110781744A (en) | Small-scale pedestrian detection method based on multi-level feature fusion | |
CN110197494A (en) | Real-time pantograph contact point detection algorithm based on monocular infrared images | |
Bertozzi et al. | IR pedestrian detection for advanced driver assistance systems | |
CN104517095A (en) | Head division method based on depth image | |
Küçükmanisa et al. | Real-time illumination and shadow invariant lane detection on mobile platform | |
Muthalagu et al. | Vehicle lane markings segmentation and keypoint determination using deep convolutional neural networks | |
Zhu et al. | Fast detection of moving object based on improved frame-difference method | |
CN114677479A (en) | Natural landscape multi-view three-dimensional reconstruction method based on deep learning | |
Sun et al. | IRDCLNet: Instance segmentation of ship images based on interference reduction and dynamic contour learning in foggy scenes | |
Zhang et al. | Road marking segmentation based on siamese attention module and maximum stable external region | |
CN109658523A (en) | Method for realizing vehicle function operation instructions using an augmented reality (AR) application | |
CN115147450B (en) | Moving target detection method and detection device based on motion frame difference image | |
Meem et al. | Zebra-crossing detection and recognition based on flood fill operation and uniform local binary pattern | |
JP7246104B2 (en) | License plate identification method based on text line identification | |
CN114926456A (en) | Rail foreign matter detection method based on semi-automatic labeling and improved deep learning | |
Zhu et al. | Monocular depth prediction through continuous 3D loss | |
Zhou et al. | Real-time detection and spatial segmentation of difference image motion changes |
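Several of the similar documents above (e.g. Zhu et al., "Fast detection of moving object based on improved frame-difference method") share this patent's first stage: differencing consecutive frames to propose a region of interest (ROI) that a convolutional network then classifies. A minimal pure-Python sketch of that ROI step follows; the function name, threshold value, and toy frames are illustrative and not taken from the patent.

```python
# Frame-difference ROI extraction: subtract consecutive grayscale frames,
# threshold the absolute per-pixel difference, and report the bounding box
# of changed pixels as a candidate region for a downstream classifier
# (the CNN stage is not shown here). Frames are plain lists of lists.

def frame_difference_roi(prev_frame, curr_frame, threshold=30):
    """Return (min_row, min_col, max_row, max_col) of changed pixels, or None."""
    changed = [
        (r, c)
        for r, row in enumerate(curr_frame)
        for c, value in enumerate(row)
        if abs(value - prev_frame[r][c]) > threshold
    ]
    if not changed:
        return None  # static scene: no ROI to pass to the classifier
    rows = [r for r, _ in changed]
    cols = [c for _, c in changed]
    return (min(rows), min(cols), max(rows), max(cols))

# A 4x4 background frame and a frame where a bright moving blob appears.
background = [[10] * 4 for _ in range(4)]
moving = [row[:] for row in background]
moving[1][1] = 200
moving[1][2] = 200

print(frame_difference_roi(background, moving))  # (1, 1, 1, 2)
```

In a real pipeline this thresholded difference would typically be filtered and dilated before the bounding box is cut out and fed to the network, which is the role the median filtering and sharpening steps play in the claims above.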
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
WD01 | Invention patent application deemed withdrawn after publication | Application publication date: 20180629 |