WO2019101220A1 - Ship automatic tracking method and system based on deep learning network and mean shift - Google Patents
Ship automatic tracking method and system based on deep learning network and mean shift
- Publication number
- WO2019101220A1 (PCT/CN2018/120294)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vessel
- target
- time
- ship
- tracking
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration by the use of histogram techniques
-
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
- G06T2207/20032—Median filtering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20076—Probabilistic image processing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30232—Surveillance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30236—Traffic on road, railway or crossing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02A—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
- Y02A10/00—TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE at coastal zones; at river basins
- Y02A10/40—Controlling or monitoring, e.g. of flood or hurricane; Forecasting, e.g. risk assessment or mapping
Definitions
- The invention belongs to the technical field of digital image processing, and particularly relates to a method for automatic tracking of vessels based on a deep learning network and mean shift.
- Video tracking is one of the focuses of computer vision research: it tracks targets of interest acquired by an image sensor. Video tracking is the basis of many video applications, such as traffic monitoring, intelligent robots, and human-computer interaction. It plays an important role in smart city management, combating crime, and building safe and smart cities, and it remains both a focus and a difficulty of current video processing research.
- Research on video tracking systems has long focused on single-target tracking, in which only one target of interest is tracked and monitored.
- Single-target tracking is of great significance for handling an abnormal event after it is discovered.
- Multi-target tracking methods, in turn, can provide substantial help to regulatory authorities.
- Current multi-target tracking mainly includes three classes of methods: prediction-based, matching-based, and detection-based.
- In prediction-based methods, tracking is treated as a state estimation problem: signal processing techniques are used to estimate the state of the target in the next frame (such as position, color, and shape). These methods mainly include filter-based tracking algorithms and subspace-learning-based algorithms.
- Filter-based algorithms such as the Kalman filter, mean shift, and particle filter learn the feature space of the target from previous data, and then locate the target according to the distribution of the current frame's image blocks in that feature space.
- Prediction-based methods are fast in multi-target tracking, but the state of the current frame depends completely on the tracking result of the previous frame, tracking cannot be started automatically, and tracking errors are difficult to correct.
- In matching-based methods, the multi-target tracking problem is treated as template matching.
- A template is used to represent the target to be tracked, and the best match is sought in the next frame.
- The template can be one image block or a set of image blocks, or a global or local feature representation of the target image. Such methods improve tracking performance through a learning process, but fully automatic multi-target tracking remains difficult, as does accurate tracking under occlusion and in complex environments.
- In detection-based methods, the tracking problem is treated as a target detection problem.
- The target is separated from the background, and the acquired data are used for training.
- A classifier is obtained, and target candidates are detected automatically frame by frame.
- The image block with the highest score is taken as the target location.
- Detection-based algorithms include two variants: offline and online.
- The former learns a classifier from pre-trained or initial one-frame or multi-frame data, while the latter retrains the classifier on data sampled from the current frame.
- Offline learning is less effective for dynamically changing targets, while online learning easily introduces new errors with each update, causing error accumulation and eventually drift or even loss of the target. How to track multiple targets automatically and accurately, that is, considering the results of the current frame while referring to different features of the target, still requires further research.
- The object of the present invention is to overcome the shortcomings and deficiencies of the prior art and to provide a method for automatic tracking of vessels based on a deep learning network and mean shift.
- The technical solution of the present invention is a ship automatic tracking method based on a deep learning network and mean shift, comprising the following steps:
- Step 1, surveillance video data collection, including collecting coastal-area surveillance video data under visible light and extracting each frame image;
- Step 2, preprocessing the video images obtained in Step 1 and extracting positive and negative samples of the vessel targets;
- Step 3, inputting the vessel target samples from the video into a region-based convolutional neural network and performing model training;
- Step 4, extracting the initial video frame data and, using the model trained in Step 3, performing vessel detection and probability density calculation on the initial-time data;
- Step 5, determining the ship tracking result at the current time from the calculation result of the previous time, including the following processing:
- Step A, taking the ξ vessel positions tracked at time t-1 as initial positions: the center coordinate f_0 of each vessel position is taken as the initial target position for tracking at time t; with f_0 as the center of the search window, the center coordinate f of the corresponding candidate vessel is obtained, the region histogram of the candidate position is computed, and the probability density is further calculated;
- Step B, using the Bhattacharyya coefficient to describe the degree of similarity between the vessel model and the candidate vessel, and computing the mean shift iterative equation of the region center, so that the model keeps moving in the direction of the largest color change until the last two moving distances are smaller than the corresponding preset threshold, yielding the set Boxm_t of vessel positions obtained by mean shift at time t;
- Step C, performing vessel detection on the image at time t with the region-based convolutional neural network, obtaining the num-th detection coordinates Boxd_t at time t of the multiple vessels in the image; for each detection, its overlap with the id-th vessel position is computed, and the maximum overlap O_max is recorded for each tracked vessel; a vessel position whose O_max is smaller than the corresponding threshold θ_1 is treated as a false alarm and deleted;
- Step D, using the neural network detection results of Step C to update new vessel targets appearing at time t: for each detection, its maximum overlap O'_max with all tracked vessel positions is computed; if O'_max is smaller than the corresponding threshold θ_2, the detection is considered a new vessel target appearing at time t and is added to the tracking result at time t, yielding the complete tracking result set (a code sketch of Steps C and D appears below).
- Here S(·) denotes the area of the corresponding region.
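As an illustration only (not part of the patent text), the following is a minimal Python sketch of the Step C and Step D bookkeeping, assuming axis-aligned boxes in (x1, y1, x2, y2) form, an intersection-over-union overlap (the patent only states that S denotes region area), and illustrative threshold values for θ_1 and θ_2:

```python
def overlap(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes (assumed definition)."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def update_tracks(mean_shift_boxes, detected_boxes, theta1=0.3, theta2=0.1):
    """Step C: drop mean-shift boxes whose best detection overlap O_max < theta1
    (false alarms). Step D: add detections whose best track overlap O'_max < theta2
    as new vessel targets. The threshold values here are illustrative only."""
    tracks = [m for m in mean_shift_boxes
              if detected_boxes
              and max(overlap(m, d) for d in detected_boxes) >= theta1]
    for d in detected_boxes:
        if not tracks or max(overlap(d, t) for t in tracks) < theta2:
            tracks.append(d)  # new vessel appearing at time t
    return tracks
```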
- The probability density calculation in Step 4 is implemented by dividing the gray color space of the target region into a gray histogram composed of several equal intervals, and computing the probability density according to the histogram interval to which the gray value of each pixel in the target region belongs.
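A short sketch (not from the patent) of this gray-histogram probability density, assuming a NumPy grayscale patch with 8-bit values and the m = 16 equal bins mentioned later in the embodiment:

```python
import numpy as np

def gray_probability_density(patch, m=16):
    """Split the 0-255 gray range into m equal intervals and return the
    normalized pixel frequency of each interval (the probability density)."""
    bins = np.clip((patch.astype(np.int32) * m) // 256, 0, m - 1)
    hist = np.bincount(bins.ravel(), minlength=m).astype(np.float64)
    return hist / hist.sum()
```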
- The invention also provides a vessel automatic tracking system based on a deep learning network and mean shift, comprising the following modules:
- a first module, configured for surveillance video data collection, including collecting coastal-area surveillance video data under visible light and extracting each frame image;
- a second module, configured to preprocess the video images obtained by the first module and extract positive and negative samples of the vessel targets;
- a third module, configured to input the vessel target samples from the video into a region-based convolutional neural network and perform model training;
- a fourth module, configured to extract the initial video frame data and, using the model trained by the third module, perform vessel detection and probability density calculation on the initial-time data;
- a fifth module, configured to determine the ship tracking result at the current time from the calculation result of the previous time, in the following manner:
- the ξ vessel positions tracked at time t-1 are taken as initial positions; the center coordinate f_0 of each vessel position is taken as the initial target position for tracking at time t; with f_0 as the center of the search window, the center coordinate f of the corresponding candidate vessel is obtained, the region histogram of the candidate position is computed, and the probability density is further calculated;
- the Bhattacharyya coefficient is used to describe the similarity between the vessel model and the candidate vessel;
- the mean shift iterative equation of the region center is computed, so that the model keeps moving in the direction of the largest color change until the last two moving distances are smaller than the corresponding preset thresholds;
- the vessel positions obtained from the mean shift result at time t are collected as a set of multiple vessel positions Boxm_t, with the id-th vessel position denoted accordingly;
- vessel detection is performed on the image at time t by the region-based convolutional neural network, obtaining the num-th detection coordinate at time t of the multiple vessels in the image; for each detection, its overlap with the id-th vessel position is computed, the maximum overlap O_max is recorded for each tracked vessel, and a vessel position whose O_max is smaller than the corresponding threshold θ_1 is considered a false alarm and deleted;
- here S(·) denotes the area of the corresponding region.
- The probability density calculation in the fourth module is implemented by dividing the gray color space of the target region into a gray histogram composed of several equal intervals, and computing the probability density according to the histogram interval to which the gray value of each pixel in the target region belongs.
- Compared with the prior art, the present invention has the following advantages and positive effects:
- The deep learning part adopts a region-based convolutional neural network to detect multiple ship targets simultaneously in surveillance video images; the method is fast, efficient, and accurate. Even in complex scenes such as clouds, overcast weather, and rain, the detection results remain good, so the method is robust.
- A fast and efficient color-histogram-based mean shift method predicts, in the current frame, the positions of all targets tracked in the previous frame simultaneously, yielding multiple predicted positions.
- The histogram of a target is not affected by changes in the target's shape; therefore, using the histogram as the target model and matching by color distribution gives better stability.
- On the one hand, combining the deep learning network with the mean shift method completes the automatic tracking of multiple ship targets, making the tracking process fully automated without human-computer interaction; on the other hand, the stability and accuracy of the neural network method also eliminate the errors of the mean shift method and lay the foundation for tracking new targets, which has important market value.
- FIG. 1 is a structural diagram of an application platform system according to an embodiment of the present invention.
- FIG. 3 is a flowchart of the specific strategy for acquiring tracking results based on the deep learning network and the mean shift method after Step 3 in the embodiment of the present invention.
- FIG. 4 is a schematic diagram of the iterative process of the mean shift algorithm used in Step 5 in the embodiment of the present invention.
- The system architecture that can be adopted mainly includes a surveillance video acquisition module, a ship tracking platform, and an application platform.
- The surveillance video acquisition module mainly uses multiple visible-light surveillance cameras to acquire video of the coastal area and send the data down to the vessel tracking module.
- The vessel tracking platform adopts the method of the invention to extract and automatically track vessel targets and to transmit vessel target abnormalities to the application platform. The ship analysis platform, behavior prediction platform, abnormal event processing platform, and ship supervision platform within the application platform then reasonably predict and plan the distribution and actions of ship targets and complete the related tasks.
- The method provided by the embodiment of the present invention includes the following steps:
- Step 1, input of surveillance video data: surveillance video data collection.
- The data collected by the present invention are mainly coastal surveillance video data under visible light.
- Each frame image needs to be obtained with a decoder or decoding code.
- The acquisition may be performed in advance.
- Step 2, data preprocessing and sample preparation: preprocessing of the video data and preparation of positive and negative samples of vessel targets.
- The preprocessing part mainly applies an image smoothing operation; the present invention uses median filtering to smooth each video frame.
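For illustration only, the median-filter smoothing described here could be realized with OpenCV; the 5×5 kernel size below is an assumption, not specified by the patent:

```python
import cv2

def smooth_frame(frame, ksize=5):
    """Median-filter one video frame; ksize is an assumed odd kernel size."""
    return cv2.medianBlur(frame, ksize)
```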
- The positive and negative samples are prepared for the convolutional neural network training in the subsequent steps.
- The specific process is as follows:
- In the first step, the video images obtained in Step 1 are expanded to a certain extent by rotation, translation, and similar operations.
- In the second step, the coordinates of the four vertices of the minimum enclosing rectangle of each ship target in the image are obtained, and the image together with all target coordinates on it is output as a positive sample.
- In the third step, other regions around the positive samples are randomly cropped, and the coordinates of the four vertices of their vertical minimum enclosing rectangles are taken as negative sample coordinates; the image is output together with the negative sample coordinates on it. (A sketch of these operations appears after this list.)
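A minimal sketch of the augmentation and negative-sample cropping described in the list above, assuming OpenCV and NumPy; the rotation angle, shift, retry strategy, and helper names are illustrative assumptions, not the patent's parameters:

```python
import cv2
import numpy as np

def augment(image, angle=10.0, shift=(20, 0)):
    """Expand the sample set by rotating and then translating an image."""
    h, w = image.shape[:2]
    m_rot = cv2.getRotationMatrix2D((w / 2.0, h / 2.0), angle, 1.0)
    rotated = cv2.warpAffine(image, m_rot, (w, h))
    m_shift = np.float32([[1, 0, shift[0]], [0, 1, shift[1]]])
    return cv2.warpAffine(rotated, m_shift, (w, h))

def boxes_disjoint(a, b):
    """True if two (x1, y1, x2, y2) boxes do not intersect."""
    return a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1]

def random_negative_box(w, h, box_w, box_h, positives, rng=np.random):
    """Randomly pick a box that avoids every positive (ship) box; negative sample."""
    for _ in range(100):  # bounded number of attempts
        x = int(rng.randint(0, w - box_w))
        y = int(rng.randint(0, h - box_h))
        cand = (x, y, x + box_w, y + box_h)
        if all(boxes_disjoint(cand, p) for p in positives):
            return cand
    return None  # no disjoint region found
```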
- Step 3, region-based convolutional neural network training: the vessel target samples from the video are input into the neural network for model training by the region-based convolutional neural network method.
- The process is as follows:
- The positive and negative sample data of the vessel targets prepared in Step 2 are formatted into a structured database format and input into a convolutional neural network for training, yielding a training result model of vessel targets under surveillance video.
- The region-based convolutional neural network consists of multiple alternating convolutional layers, pooling layers, and fully connected layers, and is trained mainly with the backpropagation (BP) algorithm using one input layer, multiple hidden layers, and one output layer.
- Here i is the index of an input-layer unit and j the index of a hidden-layer unit.
- The network is updated in the BP neural network manner.
- In a convolutional layer, the feature maps of the previous layer are convolved with learnable convolution kernels, and an activation function is then applied to obtain the output feature maps.
- The layer update after the convolution operation is as follows:
- M_j represents the selected set of input maps, k_{ij} represents the convolution kernel between input map i and output map j, and * denotes the convolution operation; the formula thus reflects the operational relationship between the l-th and (l-1)-th layers.
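The formula referenced above does not survive in this rendering; a standard convolutional-layer update consistent with the definitions just given (f the activation function, l the layer index) is offered as a plausible reconstruction, not the patent's verbatim equation:

```latex
x_j^{l} = f\Big( \sum_{i \in M_j} x_i^{l-1} * k_{ij}^{l} + b_j^{l} \Big)
```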
- The pooling process aggregates statistics over features at different locations of a large image; it greatly reduces feature redundancy and lowers the statistical feature dimension.
- The calculation formula for the pooling layer is as follows:
- D(·) represents the downsampling function of the pooling process,
- and each bias corresponds to one output map.
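The pooling-layer formula is likewise missing here; a standard form matching the description (D(·) the downsampling function, β_j a multiplicative bias and b_j an additive bias per output map) would be, as an assumption:

```latex
x_j^{l} = f\big( \beta_j^{l} \, D(x_j^{l-1}) + b_j^{l} \big)
```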
- Step 4: vessels are detected at the initial time in the video data, the initial-time vessel detection positions are obtained as the initial positions for tracking, and each vessel target is numbered.
- The probability density of each vessel at the initial time t_0 is calculated as the first input of the tracking process (FIG. 3).
- With m = 16 in this embodiment,
- a gray histogram composed of m equal intervals is obtained. The formula for calculating the initial probability density values is as follows:
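The initial-value formula itself is lost in this rendering; the classical kernel-weighted histogram model used in mean shift tracking, which matches the description (x_i the pixel positions in the target region, f_0 the region center, h the bandwidth, b(x_i) the histogram bin of pixel x_i, δ the Kronecker delta, C a normalization constant), is offered as a plausible reconstruction:

```latex
\hat{q}_u = C \sum_{i=1}^{n} K\!\left( \left\| \frac{x_i - f_0}{h} \right\|^2 \right) \delta\big[ b(x_i) - u \big], \qquad u = 1, \dots, m
```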
- Step A: the tracking result at time t-1 is input, and the probability density of the target positions of time t-1 is calculated on the image at time t. The position of each vessel tracked at time t-1 is taken as the initial position, the center coordinate f_0 of each vessel position is taken as the initial target position for tracking at time t, and with f_0 as the center of the search window, the center position coordinate f of the corresponding candidate vessel is obtained; the region histogram of the candidate position is computed with the calculation formula of Step 4, and its probability density is further calculated as follows:
- K'(x) is the derivative of the kernel function K at the input position abscissa x, and
- w_i is an intermediate quantity of the calculation.
- The whole mean shift method iterates from f_k (the center position at the k-th iteration) to f_{k+1} (the center position at the (k+1)-th iteration), so that the model keeps moving in the direction in which the color changes the most, until the last two moving distances are smaller than the threshold (this patent uses 10^-6), i.e., until the vessel position Boxm_t obtained by mean shift at time t is found.
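The iterative equation from f_k to f_{k+1} is not reproduced in this text; the standard mean shift update with Bhattacharyya-derived weights, consistent with the roles of K' and w_i defined above, would be approximately (an assumption, not the patent's verbatim equation):

```latex
f_{k+1} = \frac{\sum_{i=1}^{n} x_i \, w_i \, K'\!\left( \left\| \frac{f_k - x_i}{h} \right\|^2 \right)}
               {\sum_{i=1}^{n} w_i \, K'\!\left( \left\| \frac{f_k - x_i}{h} \right\|^2 \right)},
\qquad
w_i = \sum_{u=1}^{m} \sqrt{\frac{\hat{q}_u}{\hat{p}_u(f_0)}} \, \delta\big[ b(x_i) - u \big]
```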
- The whole mean shift process is illustrated schematically in FIG. 4.
- In the figure, the target center position starts from its initial value and moves step by step toward the cluster center, reaching a new position after the first iteration and the final position after the n-th iteration.
- Step C: vessel detection is performed on the image at time t (the image is input into the region-based convolutional neural network), which independently yields the detection result at time t, i.e., the detected candidate vessel positions Boxd_t.
- Let num be the index of a detection result at time t, num = 1, ..., K; for each detection coordinate, its overlap with the id-th vessel position is calculated; the formula for calculating the overlap is as follows:
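The overlap formula itself is lost in this rendering; given that S(·) denotes region area, an intersection-over-union form is a natural reconstruction (an assumption):

```latex
O\big(\mathrm{Boxd}_t^{\,num}, \mathrm{Boxm}_t^{\,id}\big) =
\frac{S\big(\mathrm{Boxd}_t^{\,num} \cap \mathrm{Boxm}_t^{\,id}\big)}
     {S\big(\mathrm{Boxd}_t^{\,num} \cup \mathrm{Boxm}_t^{\,id}\big)}
```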
- The process provided by the technical solution of the present invention can be run automatically by a person skilled in the art using computer software technology, and the corresponding system can likewise be implemented in a modular manner.
- The embodiment of the invention further provides a vessel automatic tracking system based on a deep learning network and mean shift, comprising the following modules:
- a first module, configured for surveillance video data collection, including collecting coastal-area surveillance video data under visible light and extracting each frame image;
- a second module, configured to preprocess the video images obtained by the first module and extract positive and negative samples of the vessel targets;
- a third module, configured to input the vessel target samples from the video into a region-based convolutional neural network and perform model training;
- a fourth module, configured to extract the initial video frame data and, using the model trained by the third module, perform vessel detection and probability density calculation on the initial-time data;
- a fifth module, configured to determine the ship tracking result at the current time from the calculation result of the previous time, in the following manner:
- the ξ vessel positions tracked at time t-1 are taken as initial positions; the center coordinate f_0 of each vessel position is taken as the initial target position for tracking at time t; with f_0 as the center of the search window, the center coordinate f of the corresponding candidate vessel is obtained, the region histogram of the candidate position is computed, and the probability density is further calculated;
- the Bhattacharyya coefficient is used to describe the similarity between the vessel model and the candidate vessel;
- the mean shift iterative equation of the region center is computed, so that the model keeps moving in the direction of the largest color change until the last two moving distances are smaller than the corresponding preset thresholds;
- the vessel positions obtained from the mean shift result at time t are collected as a set of multiple vessel positions Boxm_t, with the id-th vessel position denoted accordingly;
- vessel detection is performed on the image at time t by the region-based convolutional neural network, obtaining the num-th detection coordinate at time t of the multiple vessels in the image.
Claims (4)
- A ship automatic tracking method based on a deep learning network and mean shift, comprising the following steps: Step 1, surveillance video data collection, including collecting coastal-area surveillance video data under visible light and extracting each frame image; Step 2, preprocessing the video images obtained in Step 1 and extracting positive and negative samples of ship targets; Step 3, inputting the ship target samples from the video into a neural network by the region-based convolutional neural network method and performing model training; Step 4, extracting the initial video frame data and, according to the model trained in Step 3, performing ship detection and probability density calculation on the initial-time data; the probability density calculation is implemented by dividing the gray color space of the region where the target is located to obtain a gray histogram composed of several equal intervals, and calculating the probability density according to the histogram interval to which the gray value of each pixel in the target region belongs; Step 5, determining the ship tracking result at the current time from the calculation result of the previous time, including the following processing: Step A, taking the ξ ship positions tracked at time t-1 as initial positions, taking the center coordinate f_0 of each ship position as the initial target position for ship tracking at time t, and with f_0 as the center of the search window, obtaining the center position coordinate f of the corresponding candidate ship, calculating the region histogram of the candidate position, and further calculating the probability density; Step B, describing the degree of similarity between the ship model and the candidate ship by the Bhattacharyya coefficient, and calculating the mean shift iterative equation of the region center, so that the model keeps moving in the direction of the largest color change until the last two moving distances are smaller than the corresponding preset threshold, finding the ship positions obtained from the mean shift result at time t; denoting the multiple obtained ship positions Boxm_t, the id-th ship position is expressed as
- A ship automatic tracking system based on a deep learning network and mean shift, comprising the following modules: a first module for surveillance video data collection, including collecting coastal-area surveillance video data under visible light and extracting each frame image; a second module for preprocessing the video images obtained by the first module and extracting positive and negative samples of ship targets; a third module for inputting the ship target samples from the video into a neural network by the region-based convolutional neural network method and performing model training; a fourth module for extracting the initial video frame data and, according to the model trained by the third module, performing ship detection and probability density calculation on the initial-time data; the probability density calculation is implemented by dividing the gray color space of the region where the target is located to obtain a gray histogram composed of several equal intervals, and calculating the probability density according to the histogram interval to which the gray value of each pixel in the target region belongs; a fifth module for determining the ship tracking result at the current time from the calculation result of the previous time, in the following manner: taking the ξ ship positions tracked at time t-1 as initial positions, taking the center coordinate f_0 of each ship position as the initial target position for ship tracking at time t, and with f_0 as the center of the search window, obtaining the center position coordinate f of the corresponding candidate ship, calculating the region histogram of the candidate position, and further calculating the probability density; describing the degree of similarity between the ship model and the candidate ship by the Bhattacharyya coefficient, and calculating the mean shift iterative equation of the region center, so that the model keeps moving in the direction of the largest color change until the last two moving distances are smaller than the corresponding preset threshold, finding the ship positions obtained from the mean shift result at time t; denoting the multiple obtained ship positions, the id-th ship position is expressed accordingly; performing ship detection on the image at time t by the region-based convolutional neural network method, obtaining the num-th detection coordinates at time t of the multiple ships in the image; for each detection, calculating its overlap with the id-th ship position, and recording, for each, the position with which its overlap is largest and the corresponding overlap value O_max; if O_max is smaller than the corresponding threshold θ_1, the ship position is considered a false alarm and is deleted
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18880214.4A EP3633615A4 (en) | 2017-12-11 | 2018-12-11 | DEEP LEARNING NETWORK AND METHOD AND AUTOMATIC VESSEL TRACKING SYSTEM BASED ON AVERAGE DRIFT |
JP2019572816A JP6759474B2 (ja) | 2017-12-11 | 2018-12-11 | 深層学習ネットワーク及び平均シフトに基づく船舶自動追跡方法及びシステム |
US16/627,485 US10706285B2 (en) | 2017-12-11 | 2018-12-11 | Automatic ship tracking method and system based on deep learning network and mean shift |
KR1020207000066A KR102129893B1 (ko) | 2017-12-11 | 2018-12-11 | 딥러닝 네트워크 및 평균 이동을 기반으로 하는 선박 자동추적 방법 및 시스템 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711324260.4 | 2017-12-11 | ||
CN201711324260.4A CN107818571B (zh) | 2017-12-11 | 2017-12-11 | 基于深度学习网络和均值漂移的船只自动跟踪方法及系统 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019101220A1 (zh) | 2019-05-31 |
Family
ID=61605528
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2018/120294 WO2019101220A1 (zh) | 2017-12-11 | 2018-12-11 | 基于深度学习网络和均值漂移的船只自动跟踪方法及系统 |
Country Status (6)
Country | Link |
---|---|
US (1) | US10706285B2 (zh) |
EP (1) | EP3633615A4 (zh) |
JP (1) | JP6759474B2 (zh) |
KR (1) | KR102129893B1 (zh) |
CN (1) | CN107818571B (zh) |
WO (1) | WO2019101220A1 (zh) |
Families Citing this family (50)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107818571B (zh) * | 2017-12-11 | 2018-07-20 | 珠海大横琴科技发展有限公司 | 基于深度学习网络和均值漂移的船只自动跟踪方法及系统 |
CN108537826A (zh) * | 2018-05-28 | 2018-09-14 | 深圳市芯汉感知技术有限公司 | 一种基于人工干预的舰船目标跟踪方法 |
CN109145836B (zh) * | 2018-08-28 | 2021-04-16 | 武汉大学 | 基于深度学习网络和卡尔曼滤波的船只目标视频检测方法 |
CN109241913B (zh) * | 2018-09-10 | 2021-05-18 | 武汉大学 | 结合显著性检测和深度学习的船只检测方法及系统 |
CN109684953B (zh) * | 2018-12-13 | 2021-05-14 | 北京小龙潜行科技有限公司 | 基于目标检测和粒子滤波算法进行猪只跟踪的方法及装置 |
CN109859202B (zh) * | 2019-02-18 | 2022-04-12 | 哈尔滨工程大学 | 一种基于usv水面光学目标跟踪的深度学习检测方法 |
CN110232315A (zh) * | 2019-04-29 | 2019-09-13 | 华为技术有限公司 | 目标检测方法和装置 |
CN110532943A (zh) * | 2019-08-28 | 2019-12-03 | 郑州轻工业学院 | 基于Camshift算法与影像逐帧结合的航道状态分析方法 |
CN110660082B (zh) * | 2019-09-25 | 2022-03-08 | 西南交通大学 | 一种基于图卷积与轨迹卷积网络学习的目标跟踪方法 |
CN111611836A (zh) * | 2019-12-27 | 2020-09-01 | 珠海大横琴科技发展有限公司 | 基于背景消除法的船只检测模型训练及船只跟踪方法 |
CN111311647B (zh) * | 2020-01-17 | 2023-07-14 | 长沙理工大学 | 一种基于全局-局部及卡尔曼滤波的目标跟踪方法及装置 |
CN111738112B (zh) * | 2020-06-10 | 2023-07-07 | 杭州电子科技大学 | 基于深度神经网络和自注意力机制的遥感船舶图像目标检测方法 |
CN111667509B (zh) * | 2020-06-11 | 2023-05-26 | 中国矿业大学 | 目标与背景颜色相似下的运动目标自动跟踪方法及系统 |
CN111754545A (zh) * | 2020-06-16 | 2020-10-09 | 江南大学 | 一种基于iou匹配的双滤波器视频多目标跟踪方法 |
CN111814734B (zh) * | 2020-07-24 | 2024-01-26 | 南方电网数字电网研究院有限公司 | 识别刀闸状态的方法 |
US11742901B2 (en) * | 2020-07-27 | 2023-08-29 | Electronics And Telecommunications Research Institute | Deep learning based beamforming method and apparatus |
CN111985363B (zh) * | 2020-08-06 | 2022-05-06 | 武汉理工大学 | 一种基于深度学习框架的船舶名称识别系统及方法 |
CN111898699A (zh) * | 2020-08-11 | 2020-11-06 | 海之韵(苏州)科技有限公司 | 一种船体目标自动检测识别方法 |
CN112183946A (zh) * | 2020-09-07 | 2021-01-05 | 腾讯音乐娱乐科技(深圳)有限公司 | 多媒体内容评估方法、装置及其训练方法 |
CN112417955B (zh) * | 2020-10-14 | 2024-03-05 | 国能大渡河沙坪发电有限公司 | 巡检视频流处理方法及装置 |
CN112270661A (zh) * | 2020-10-19 | 2021-01-26 | 北京宇航系统工程研究所 | 一种基于火箭遥测视频的空间环境监测方法 |
CN112183470B (zh) * | 2020-10-28 | 2022-07-19 | 长江大学 | 一种船舶水尺识别方法、设备及存储介质 |
CN112308881B (zh) * | 2020-11-02 | 2023-08-15 | 西安电子科技大学 | 一种基于遥感图像的舰船多目标跟踪方法 |
CN112378397B (zh) * | 2020-11-02 | 2023-10-10 | 中国兵器工业计算机应用技术研究所 | 无人机跟踪目标的方法、装置及无人机 |
CN112364763B (zh) * | 2020-11-10 | 2024-01-26 | 南京农业大学 | 基于边缘计算的仔猪吃奶行为监测系统 |
CN112258549B (zh) * | 2020-11-12 | 2022-01-04 | 珠海大横琴科技发展有限公司 | 一种基于背景消除的船只目标跟踪方法及装置 |
CN112329707A (zh) * | 2020-11-23 | 2021-02-05 | 珠海大横琴科技发展有限公司 | 基于kcf滤波的无人机影像船只跟踪算法和装置 |
CN112866643A (zh) * | 2021-01-08 | 2021-05-28 | 中国船舶重工集团公司第七0七研究所 | 一种船内关键区域多目标可视化管理系统及方法 |
CN112767445B (zh) * | 2021-01-22 | 2024-04-12 | 东南大学 | 一种用于视频中船舶目标跟踪的方法 |
CN113283279B (zh) * | 2021-01-25 | 2024-01-19 | 广东技术师范大学 | 一种基于深度学习的视频中多目标跟踪方法及装置 |
CN113804470B (zh) * | 2021-04-14 | 2023-12-01 | 山东省计算中心(国家超级计算济南中心) | 一种穴盘育苗流水线的故障检测反馈方法 |
CN113012203B (zh) * | 2021-04-15 | 2023-10-20 | 南京莱斯电子设备有限公司 | 一种复杂背景下高精度多目标跟踪方法 |
CN113269204B (zh) * | 2021-05-17 | 2022-06-17 | 山东大学 | 一种彩色直接部件标记图像的颜色稳定性分析方法及系统 |
CN113313008B (zh) * | 2021-05-26 | 2022-08-05 | 南京邮电大学 | 基于YOLOv3网络和均值漂移的目标与识别跟踪方法 |
CN113313166B (zh) * | 2021-05-28 | 2022-07-26 | 华南理工大学 | 基于特征一致性学习的船舶目标自动标注方法 |
CN113362373B (zh) * | 2021-06-01 | 2023-12-15 | 北京首都国际机场股份有限公司 | 基于双孪生网络的复杂机坪区域内飞机跟踪方法 |
CN113379603B (zh) * | 2021-06-10 | 2024-03-15 | 大连海事大学 | 一种基于深度学习的船舶目标检测方法 |
CN113686314B (zh) * | 2021-07-28 | 2024-02-27 | 武汉科技大学 | 船载摄像头的单目水面目标分割及单目测距方法 |
CN113705502A (zh) * | 2021-09-02 | 2021-11-26 | 浙江索思科技有限公司 | 一种融合目标检测和目标跟踪的船舶目标行为理解系统 |
CN113554123B (zh) * | 2021-09-18 | 2022-01-21 | 江苏禹治流域管理技术研究院有限公司 | 一种基于声光联动的采砂船自动识别方法 |
CN114936413B (zh) * | 2022-04-21 | 2023-06-06 | 哈尔滨工程大学 | 船体外形优化神经网络建模方法及船体外形优化方法 |
CN114627683B (zh) * | 2022-05-13 | 2022-09-13 | 深圳海卫通网络科技有限公司 | 船舶驾驶异常行为的预警方法、装置、设备、介质及系统 |
CN115423844B (zh) * | 2022-09-01 | 2023-04-11 | 北京理工大学 | 一种基于多模块联合的目标跟踪方法 |
KR20240043171A (ko) | 2022-09-26 | 2024-04-03 | 김준연 | 멀티 채널 인공지능망 기반의 선박 검출, 분류 및 추적 시스템 |
CN115690061B (zh) * | 2022-11-08 | 2024-01-05 | 北京国泰星云科技有限公司 | 一种基于视觉的集装箱码头集卡车检测方法 |
CN116303523B (zh) * | 2022-11-30 | 2023-10-17 | 杭州数聚链科技有限公司 | 一种货船自动识别采样方法及系统 |
CN115797411B (zh) * | 2023-01-17 | 2023-05-26 | 长江勘测规划设计研究有限责任公司 | 一种利用机器视觉在线识别水电站电缆桥架形变的方法 |
CN116188519B (zh) * | 2023-02-07 | 2023-10-03 | 中国人民解放军海军航空大学 | 一种基于视频卫星的舰船目标运动状态估计方法及系统 |
CN115859485B (zh) * | 2023-02-27 | 2023-05-23 | 青岛哈尔滨工程大学创新发展中心 | 一种基于船舶外形特征的流线种子点选取方法 |
CN116823872B (zh) * | 2023-08-25 | 2024-01-26 | 尚特杰电力科技有限公司 | 基于目标追踪和图像分割的风机巡检方法及系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102081801A (zh) * | 2011-01-26 | 2011-06-01 | 上海交通大学 | 多特征自适应融合船舶跟踪和航迹检测方法 |
CN104809917A (zh) * | 2015-03-23 | 2015-07-29 | 南通大学 | 船舶实时跟踪监控方法 |
CN106372590A (zh) * | 2016-08-29 | 2017-02-01 | 江苏科技大学 | 一种基于机器视觉的海面船只智能跟踪系统及其方法 |
CN106910204A (zh) * | 2016-12-30 | 2017-06-30 | 中国人民解放军空军预警学院监控系统工程研究所 | 一种对海面船只自动跟踪识别的方法和系统 |
CN107818571A (zh) * | 2017-12-11 | 2018-03-20 | 珠海大横琴科技发展有限公司 | 基于深度学习网络和均值漂移的船只自动跟踪方法及系统 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7899253B2 (en) * | 2006-09-08 | 2011-03-01 | Mitsubishi Electric Research Laboratories, Inc. | Detecting moving objects in video by classifying on riemannian manifolds |
US8374388B2 (en) * | 2007-12-28 | 2013-02-12 | Rustam Stolkin | Real-time tracking of non-rigid objects in image sequences for which the background may be changing |
GB2492248B (en) * | 2008-03-03 | 2013-04-10 | Videoiq Inc | Dynamic object classification |
GB0818561D0 (en) * | 2008-10-09 | 2008-11-19 | Isis Innovation | Visual tracking of objects in images, and segmentation of images |
US8401239B2 (en) * | 2009-03-30 | 2013-03-19 | Mitsubishi Electric Research Laboratories, Inc. | Object tracking with regressing particles |
US8335348B2 (en) * | 2009-12-14 | 2012-12-18 | Indian Institute Of Technology Bombay | Visual object tracking with scale and orientation adaptation |
US8712096B2 (en) * | 2010-03-05 | 2014-04-29 | Sri International | Method and apparatus for detecting and tracking vehicles |
WO2012006578A2 (en) * | 2010-07-08 | 2012-01-12 | The Regents Of The University Of California | End-to-end visual recognition system and methods |
US8615105B1 (en) * | 2010-08-31 | 2013-12-24 | The Boeing Company | Object tracking system |
JP5719230B2 (ja) * | 2011-05-10 | 2015-05-13 | キヤノン株式会社 | 物体認識装置、物体認識装置の制御方法、およびプログラム |
US9832452B1 (en) * | 2013-08-12 | 2017-11-28 | Amazon Technologies, Inc. | Robust user detection and tracking |
CN104182772B (zh) * | 2014-08-19 | 2017-10-24 | 大连理工大学 | 一种基于深度学习的手势识别方法 |
CN105184271A (zh) * | 2015-09-18 | 2015-12-23 | 苏州派瑞雷尔智能科技有限公司 | 一种基于深度学习的车辆自动检测方法 |
US10776926B2 (en) * | 2016-03-17 | 2020-09-15 | Avigilon Corporation | System and method for training object classifier by machine learning |
CN107291232A (zh) * | 2017-06-20 | 2017-10-24 | 深圳市泽科科技有限公司 | 一种基于深度学习与大数据的体感游戏交互方法及系统 |
CN107358176A (zh) * | 2017-06-26 | 2017-11-17 | 武汉大学 | 基于高分遥感影像区域信息和卷积神经网络的分类方法 |
-
2017
- 2017-12-11 CN CN201711324260.4A patent/CN107818571B/zh active Active
-
2018
- 2018-12-11 EP EP18880214.4A patent/EP3633615A4/en active Pending
- 2018-12-11 KR KR1020207000066A patent/KR102129893B1/ko active IP Right Grant
- 2018-12-11 WO PCT/CN2018/120294 patent/WO2019101220A1/zh active Application Filing
- 2018-12-11 US US16/627,485 patent/US10706285B2/en active Active
- 2018-12-11 JP JP2019572816A patent/JP6759474B2/ja active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102081801A (zh) * | 2011-01-26 | 2011-06-01 | 上海交通大学 | 多特征自适应融合船舶跟踪和航迹检测方法 |
CN104809917A (zh) * | 2015-03-23 | 2015-07-29 | 南通大学 | 船舶实时跟踪监控方法 |
CN106372590A (zh) * | 2016-08-29 | 2017-02-01 | 江苏科技大学 | 一种基于机器视觉的海面船只智能跟踪系统及其方法 |
CN106910204A (zh) * | 2016-12-30 | 2017-06-30 | 中国人民解放军空军预警学院监控系统工程研究所 | 一种对海面船只自动跟踪识别的方法和系统 |
CN107818571A (zh) * | 2017-12-11 | 2018-03-20 | 珠海大横琴科技发展有限公司 | 基于深度学习网络和均值漂移的船只自动跟踪方法及系统 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3633615A4 * |
Cited By (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110287875A (zh) * | 2019-06-25 | 2019-09-27 | 腾讯科技(深圳)有限公司 | 视频目标的检测方法、装置、电子设备和存储介质 |
CN110287875B (zh) * | 2019-06-25 | 2022-10-21 | 腾讯科技(深圳)有限公司 | 视频目标的检测方法、装置、电子设备和存储介质 |
CN111161313A (zh) * | 2019-12-16 | 2020-05-15 | 华中科技大学鄂州工业技术研究院 | 一种视频流中的多目标追踪方法及装置 |
CN111444828A (zh) * | 2020-03-25 | 2020-07-24 | 腾讯科技(深圳)有限公司 | 一种模型训练的方法、目标检测的方法、装置及存储介质 |
CN111738081A (zh) * | 2020-05-20 | 2020-10-02 | 杭州电子科技大学 | 一种难样本重训练的深度神经网络声呐目标检测方法 |
CN111950519A (zh) * | 2020-08-27 | 2020-11-17 | 重庆科技学院 | 基于检测与密度估计的双列卷积神经网络人群计数方法 |
CN112183232A (zh) * | 2020-09-09 | 2021-01-05 | 上海鹰觉科技有限公司 | 基于深度学习的船舷号位置定位方法及系统 |
CN114511793A (zh) * | 2020-11-17 | 2022-05-17 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于同步检测跟踪的无人机对地探测方法及系统 |
CN114511793B (zh) * | 2020-11-17 | 2024-04-05 | 中国人民解放军军事科学院国防科技创新研究院 | 一种基于同步检测跟踪的无人机对地探测方法及系统 |
CN112465867A (zh) * | 2020-11-30 | 2021-03-09 | 南京莱斯电子设备有限公司 | 一种基于卷积神经网络的红外点目标实时检测跟踪方法 |
CN112465867B (zh) * | 2020-11-30 | 2024-01-05 | 南京莱斯电子设备有限公司 | 一种基于卷积神经网络的红外点目标实时检测跟踪方法 |
CN112378458B (zh) * | 2020-12-04 | 2022-06-03 | 四川长虹电器股份有限公司 | 一种无人值守采砂船运行监控监测方法 |
CN112378458A (zh) * | 2020-12-04 | 2021-02-19 | 四川长虹电器股份有限公司 | 一种无人值守采砂船运行监控监测方法 |
CN112507965A (zh) * | 2020-12-23 | 2021-03-16 | 北京海兰信数据科技股份有限公司 | 一种电子瞭望系统的目标识别方法及系统 |
CN112906463A (zh) * | 2021-01-15 | 2021-06-04 | 上海东普信息科技有限公司 | 基于图像的火情检测方法、装置、设备及存储介质 |
CN112802050A (zh) * | 2021-01-25 | 2021-05-14 | 商汤集团有限公司 | 网络训练、目标跟踪方法及装置、电子设备和存储介质 |
CN112802050B (zh) * | 2021-01-25 | 2024-04-16 | 商汤集团有限公司 | 网络训练、目标跟踪方法及装置、电子设备和存储介质 |
CN112949400A (zh) * | 2021-01-26 | 2021-06-11 | 四川大学 | 一种基于深度学习的动物智能实验系统与方法 |
CN112949400B (zh) * | 2021-01-26 | 2022-07-08 | 四川大学 | 一种基于深度学习的动物智能实验系统与方法 |
CN113705503A (zh) * | 2021-09-02 | 2021-11-26 | 浙江索思科技有限公司 | 一种基于多模态信息融合的异常行为检测系统及方法 |
CN113792633B (zh) * | 2021-09-06 | 2023-12-22 | 北京工商大学 | 一种基于神经网络和光流法的人脸追踪系统和追踪方法 |
CN113792633A (zh) * | 2021-09-06 | 2021-12-14 | 北京工商大学 | 一种基于神经网络和光流法的人脸追踪系统和追踪方法 |
CN114758363B (zh) * | 2022-06-16 | 2022-08-19 | 四川金信石信息技术有限公司 | 一种基于深度学习的绝缘手套佩戴检测方法和系统 |
CN114758363A (zh) * | 2022-06-16 | 2022-07-15 | 四川金信石信息技术有限公司 | 一种基于深度学习的绝缘手套佩戴检测方法和系统 |
CN116300480A (zh) * | 2023-05-23 | 2023-06-23 | 西南科技大学 | 基于改进粒子滤波和生物启发神经网络的放射源搜寻方法 |
CN116385984B (zh) * | 2023-06-05 | 2023-09-01 | 武汉理工大学 | 船舶吃水深度的自动检测方法和装置 |
CN116385984A (zh) * | 2023-06-05 | 2023-07-04 | 武汉理工大学 | 船舶吃水深度的自动检测方法和装置 |
CN117576164A (zh) * | 2023-12-14 | 2024-02-20 | 中国人民解放军海军航空大学 | 基于特征联合学习的遥感视频海陆运动目标跟踪方法 |
CN117688498A (zh) * | 2024-01-30 | 2024-03-12 | 广州中海电信有限公司 | 基于船岸协同的船舶综合安全状态监控系统 |
Also Published As
Publication number | Publication date |
---|---|
US10706285B2 (en) | 2020-07-07 |
JP2020526826A (ja) | 2020-08-31 |
JP6759474B2 (ja) | 2020-09-23 |
CN107818571A (zh) | 2018-03-20 |
CN107818571B (zh) | 2018-07-20 |
EP3633615A4 (en) | 2020-09-23 |
US20200160061A1 (en) | 2020-05-21 |
EP3633615A1 (en) | 2020-04-08 |
KR102129893B1 (ko) | 2020-07-03 |
KR20200006167A (ko) | 2020-01-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2019101220A1 (zh) | 基于深度学习网络和均值漂移的船只自动跟踪方法及系统 | |
KR102171122B1 (ko) | 장면의 다차원 특징을 기반으로 하는 선박 탐지 방법 및 시스템 | |
CN110232350B (zh) | 一种基于在线学习的实时水面多运动目标检测跟踪方法 | |
CN109145836B (zh) | 基于深度学习网络和卡尔曼滤波的船只目标视频检测方法 | |
CN113034548B (zh) | 一种适用于嵌入式终端的多目标跟踪方法及其系统 | |
CN101141633B (zh) | 一种复杂场景中的运动目标检测与跟踪方法 | |
Craye et al. | Spatio-temporal semantic segmentation for drone detection | |
Bloisi et al. | Argos—A video surveillance system for boat traffic monitoring in Venice | |
CN112634325B (zh) | 一种无人机视频多目标跟踪方法 | |
CN108804992B (zh) | 一种基于深度学习的人群统计方法 | |
CN112528817B (zh) | 一种基于神经网络的巡检机器人视觉检测及跟踪方法 | |
CN105160649A (zh) | 基于核函数非监督聚类的多目标跟踪方法及系统 | |
CN108776974A (zh) | 一种适用于公共交通场景的实时目标跟踪方法 | |
CN112991391A (zh) | 一种基于雷达信号和视觉融合的车辆检测与跟踪方法 | |
CN110619276B (zh) | 基于无人机移动监控的异常及暴力检测系统和方法 | |
CN104778699B (zh) | 一种自适应对象特征的跟踪方法 | |
CN112541424A (zh) | 复杂环境下行人跌倒的实时检测方法 | |
CN114022910A (zh) | 泳池防溺水监管方法、装置、计算机设备及存储介质 | |
CN116403139A (zh) | 一种基于目标检测的视觉跟踪定位方法 | |
CN111354016A (zh) | 基于深度学习和差异值哈希的无人机舰船跟踪方法及系统 | |
WO2022021661A1 (zh) | 一种基于高斯过程的视觉定位方法、系统及存储介质 | |
CN116862832A (zh) | 一种基于三维实景模型的作业人员定位方法 | |
CN116342645A (zh) | 一种针对游泳馆场景下的多目标跟踪方法 | |
CN115482489A (zh) | 基于改进YOLOv3的配电房行人检测和轨迹追踪方法及系统 | |
CN112541403B (zh) | 一种利用红外摄像头的室内人员跌倒检测方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 18880214 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2019572816 Country of ref document: JP Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 20207000066 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: KR1020207000066 Country of ref document: KR |
|
ENP | Entry into the national phase |
Ref document number: 2018880214 Country of ref document: EP Effective date: 20200102 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |